AY 2026 Undergraduate School Course Catalog

Applications

2026/02/19


開講学期
/Semester
2026年度/Academic Year  3学期 /Third Quarter
対象学年
/Course for;
3rd year
単位数
/Credits
4.0
責任者
/Coordinator
THENUWARA HANNADIGE Akila Sanjaya.S
担当教員名
/Instructor
SHIN Jungpil, PAIK Incheon, THENUWARA HANNADIGE Akila Sanjaya.S, TBD-4
推奨トラック
/Recommended track
先修科目
/Essential courses
Courses preferred to be learned prior to this course (This course assumes understanding of the entire or partial content of the following courses)

MA01 Linear Algebra I
MA02 Linear Algebra II
MA03 Calculus I
MA04 Calculus II
MA07 Probability and Statistics
FU08 Automata and Languages
更新日/Last updated on 2026/02/02
授業の概要
/Course outline
This course lays the foundation of "Artificial Intelligence" through essential principles and techniques that bridge traditional and modern approaches. It follows the classic AIMA (Artificial Intelligence: A Modern Approach, Russell and Norvig) teaching/learning approach, with minor revisions that adopt topics in language modeling.

At the end of the course, students will be able to:

- Formulate AI problems with clear state/feature representations and assumptions.
- Implement and analyze core search and adversarial search methods (e.g., BFS/DFS/UCS/A*, minimax).
- Represent knowledge with logic/KR and perform basic inference; connect KR choices to planning needs.
- Solve structured problems using CSP techniques and classical planning representations/algorithms.
- Model uncertainty with graphical models and perform probabilistic inference (including approximate inference).
- Train and evaluate supervised learning and deep learning models, reasoning about generalization.
- Formulate MDPs and apply core RL ideas (planning via value iteration; learning via Monte Carlo), recognizing RLHF’s role and risks.
- Explain the transformer-based LLM pipeline (pretraining -> alignment -> evaluation) and articulate practical limitations, failure modes, and system constraints.
授業の目的と到達目標
/Objectives and attainment
goals
[Corresponding Learning Outcomes]

This course is planned with 7 main modules and 14 lectures.

The overall objective is to provide students with a foundation in classical and modern AI: how to model problems, represent knowledge, reason under uncertainty, learn from data, make sequential decisions, and understand modern language/foundation models, while applying responsible-AI principles throughout.

Each module sets its own objectives in support of the overall course plan. The module number, name (lecture numbers), goal, and a brief introduction of each module are given below.

Module 1: Course Introduction (Lecture 1):
Goal: Orient students to the AI landscape (classical and modern), outline expectations for assignments, and brief them on responsible AI.
Brief Introduction: Establish shared vocabulary for “what is AI,” course expectations, and responsible-AI framing for all subsequent modules.

Module 2: Problem Solving by Search (Lectures 2-3):
Goal: Teach students to formulate problems as search, implement/analyze core algorithms (BFS/DFS/UCS/A*), and introduce adversarial search (minimax) as the foundation for later CSP, planning, and RL.
Brief Introduction: Develop problem-formulation skills and the core algorithmic toolkit for search (uninformed, optimal, and heuristic) and adversarial decision-making.

Module 3: Knowledge, Reasoning, and Planning (Lectures 4-5):
Goal: Teach how to represent knowledge (logic/KR, ontologies), perform basic inference (reasoning), and solve structured problems via CSP and classical planning, forming the conceptual bridge from search to decision making under uncertainty (MDPs/RL).
Brief Introduction: Connect representations (logic/KR/ontologies) to inference, then to structured solving via CSP and classical planning.

Module 4: Uncertain Knowledge and Reasoning (Lectures 6-7):
Goal: Equip students to represent uncertainty with graphical models (Markov networks and Bayes nets) and to run practical inference (including approximate inference via Gibbs sampling), with a short, applied introduction to parameter learning.
Brief Introduction: Introduce uncertainty-aware modeling with graphical models and practical inference/learning tools.

Module 5: Learning from Data (Lectures 8-10):
Goal: Equip students with the core supervised-learning workflow: models (linear, logistic/softmax, MLPs), losses/optimization (training with minibatches/backprop), and evaluation/generalization, as needed for downstream RL/LLM topics and course projects.
Brief Introduction: Build the supervised-learning workflow from linear models to neural networks, with emphasis on optimization, evaluation, and generalization.

Module 6: Learning and Planning from Rewards: Sequential Decision Making (Lectures 11-12):
Goal: Teach sequential decision making via MDPs (modeling + policy evaluation), planning with dynamic programming (value iteration), and model-free RL (Monte Carlo), with a concise bridge to RLHF and its practical risks.
Brief Introduction: Develop sequential decision-making foundations from model-based planning to model-free learning, plus a modern bridge to RLHF.

Module 7: Language & Foundation Models (Lectures 13-14):
Goal: Build a coherent modern foundation from the language-modeling objective to transformers, then connect it to the end-to-end LLM pipeline (pretraining -> instruction tuning/RLHF -> evaluation, limitations, and system constraints).
Brief Introduction: Provide modern LLM foundations (LM objective, transformers) and connect them to training, alignment, evaluation, and limitations.


[Competency Codes]
C-IS-001, C-IS-002-2, C-IS-003, C-IS-004-1, C-IS-005-2
授業スケジュール
/Class schedule
Lecture 1: Introduction to AI
- What is AI, and a brief history
- Related research fields
- Course flow (preparation for assignments)
- Responsible AI (Society, Economics, Ethics & Responsibility)
[in-class] 4.25-hrs, [review] 1.5-hrs, [exercise] 2.0-hrs, [preparation] 1.5-hrs -> 9.25-hrs
Focus: in-class framing; light prep; exercise setup.

Lecture 2: Search I: Problem Formulation & Uninformed Search
- Tree search: Basic exhaustive search, BFS, DFS
- Dynamic programming
[in-class] 4.25-hrs, [review] 1.5-hrs, [exercise] 3.0-hrs, [preparation] 1.5-hrs -> 10.25-hrs
Focus: exercises = tracing/implementing BFS/DFS/DP patterns.
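As a concrete reference point for the tracing exercises, here is a minimal BFS over an explicit graph; this is an illustrative sketch, not course-provided code, and the graph and node names are invented:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Return a shortest path (fewest edges) from start to goal, or None."""
    frontier = deque([start])
    parent = {start: None}          # also serves as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk parent pointers back to start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None

# Toy graph for illustration only.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
print(bfs_path(graph, "S", "G"))  # ['S', 'A', 'G']
```

Swapping the deque's `popleft()` for `pop()` turns the same skeleton into DFS, which is a useful exercise in itself.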

Lecture 3: Search II: Heuristic & Adversarial Search
- Uniform cost search & UCS correctness
- A*
- Minimax algorithm
[in-class] 4.25-hrs, [review] 1.5-hrs, [exercise] 3.5-hrs, [preparation] 2.0-hrs -> 11.25-hrs
Focus: more prep + exercises for A*/heuristics + minimax/alpha-beta.
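For the A* portion, a compact sketch under the usual assumptions (admissible heuristic, nonnegative edge costs); the toy graph and heuristic values below are invented for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search; graph[n] is a list of (neighbor, edge_cost) pairs.
    h is an admissible heuristic (never overestimates). Returns total cost."""
    frontier = [(h(start), 0.0, start)]   # (f = g + h, g, node)
    best_g = {start: 0.0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                       # stale queue entry, skip it
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt))
    return None

# Toy instance: optimal route is S -> A -> B -> G with cost 4.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}.get
print(a_star(graph, h, "S", "G"))  # 4.0
```

Setting `h` to return 0 everywhere reduces this to UCS, which is a handy way to check both implementations against each other.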

Lecture 4: Knowledge Representation & Logical Reasoning (Logic and Ontologies)
- Big picture pipeline: Represent > Reason > Plan (what and when)
- Logic essentials for KR & reasoning extras
- Ontology basics (KR-focused)
- Ontology in practice
[in-class] 4.25-hrs, [review] 2.0-hrs, [exercise] 3.0-hrs, [preparation] 2.0-hrs -> 11.25-hrs
Focus: prep/review to stabilize notation; exercises in modeling (English -> KR).

Lecture 5: Constraint Satisfaction & Classical Planning (CSP and Planning)
- CSP extensions
- Planning algorithms & representations
- Planning under uncertainty pointer (bridge to MDP/RL)
[in-class] 4.25-hrs, [review] 2.0-hrs, [exercise] 6.0-hrs, [preparation] 2.0-hrs -> 14.25-hrs
Focus: heavy exercises (CSP solving + planning-as-search formulations).

Lecture 6: Probabilistic Reasoning I: Markov Networks & Approximate Inference
- Markov Network Overview and Factorization
- Conditional Independence in Markov Networks
- Gibbs Sampling for Marginals
[in-class] 4.25-hrs, [review] 2.0-hrs, [exercise] 5.0-hrs, [preparation] 2.0-hrs -> 13.25-hrs
Focus: exercises running/diagnosing sampling and interpreting marginals.
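The sampling exercises can be previewed on the smallest possible example: a two-node binary Markov network with pairwise potential phi(x1, x2) = exp(w * x1 * x2), whose exact marginal P(x1 = +1) is 0.5 by symmetry. This from-scratch sketch (not course code; the parameters are invented) estimates that marginal by Gibbs sampling:

```python
import math, random

def gibbs_marginal(w=1.0, n_samples=20000, burn_in=1000, seed=0):
    """Estimate P(x1 = +1) for phi(x1, x2) = exp(w * x1 * x2), x_i in {-1, +1}.
    Each full conditional is P(x_i = +1 | x_j) = sigmoid(2 * w * x_j)."""
    rng = random.Random(seed)
    x1, x2 = 1, 1
    hits = 0
    for t in range(burn_in + n_samples):
        # Resample each variable from its full conditional given the other.
        p1 = 1.0 / (1.0 + math.exp(-2.0 * w * x2))
        x1 = 1 if rng.random() < p1 else -1
        p2 = 1.0 / (1.0 + math.exp(-2.0 * w * x1))
        x2 = 1 if rng.random() < p2 else -1
        if t >= burn_in and x1 == 1:
            hits += 1
    return hits / n_samples

print(gibbs_marginal())  # close to the exact value 0.5
```

Increasing `w` makes the chain mix more slowly, which is exactly the diagnosing-sampling behavior the exercises target.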

Lecture 7: Probabilistic Reasoning II: Bayesian Networks & Inference (and Learning)
Bayesian Networks: Factorization, Independence, and Inference
- Bayesian Network Overview
- Conditional Independence in Bayes Nets (d-separation intuition)
- Inference (one main method, e.g., variable elimination)
- Parameter Learning (quick; practical)
[in-class] 4.25-hrs, [review] 2.0-hrs, [exercise] 4.0-hrs, [preparation] 2.0-hrs -> 12.25-hrs
Focus: exercises on d-separation + variable elimination workflows.

Lecture 8: Supervised Learning I: Linear Models (Regression, Classification)
- Linear regression
- Linear classification
[in-class] 4.25-hrs, [review] 2.0-hrs, [exercise] 5.0-hrs, [preparation] 2.0-hrs -> 13.25-hrs
Focus: exercises implementing GD + interpreting loss/fit.
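The "implementing GD" exercise pattern can be sketched from scratch on a toy 1-D regression problem; the data and hyperparameters below are invented for illustration:

```python
def fit_line(xs, ys, lr=0.05, epochs=500):
    """Fit y = a*x + b by full-batch gradient descent on mean squared error."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of (1/n) * sum (a*x + b - y)^2 with respect to a and b.
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 2x + 1, so the fit should recover it
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # 2.0 1.0
```

Plotting the loss per epoch (the "interpreting loss/fit" half of the exercise) is a one-line extension of the loop.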

Lecture 9: Supervised Learning II: Logistic/Softmax; Losses & Generalization
- Logistic/softmax models
- Losses, Regularization, Generalization
- Evaluation basics
- Unsupervised learning (quick overview): clustering, dimensionality reduction (motivation and when to use)
[in-class] 4.25-hrs, [review] 2.0-hrs, [exercise] 5.0-hrs, [preparation] 2.0-hrs -> 13.25-hrs
Focus: exercises on evaluation, CV, regularization, and clustering.

Lecture 10: Deep Learning: MLPs; Backpropagation & Optimization
- MLPs, activations, outputs
- Losses, minibatches
- Backpropagation, optimizers (main chunk)
- Generalization tools (dropout, early stopping, etc.)
- Training diagnostics, wrap-up
[in-class] 4.50-hrs, [review] 2.0-hrs, [exercise] 7.0-hrs, [preparation] 3.0-hrs -> 16.50-hrs
Focus: extra prep + heavy exercises for backprop/training/debugging.

Lecture 11: Markov Decision Processes: Modeling and Policy Evaluation
- Modeling: defining MDPs; dice game; transportation problem.
- Policy evaluation: evaluating a policy; the discount factor.
[in-class] 4.25-hrs, [review] 2.0-hrs, [exercise] 3.0-hrs, [preparation] 2.0-hrs -> 11.25-hrs
Focus: exercises formulating MDPs + computing values/policies.

Lecture 12: Reinforcement Learning: From MDP Planning to Monte Carlo (and RLHF Overview)
- Value iteration
- Model-free Monte Carlo
- Summary bridge to TD/Q-learning
- RLHF pipeline overview and risks
[in-class] 4.50-hrs, [review] 2.0-hrs, [exercise] 8.0-hrs, [preparation] 3.0-hrs -> 17.50-hrs
Focus: heaviest exercise block (Monte Carlo RL + comparisons; RLHF pipeline literacy).
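The value-iteration half of this module can be previewed on a toy MDP in the spirit of Lecture 11's dice game (our own encoding, not course material): "stay" earns 4 with a 2/3 chance of continuing, "quit" earns 10 and ends the game, so staying is optimal with value 12:

```python
def value_iteration(states, actions, T, R, gamma=0.9, tol=1e-8):
    """T[s][a] -> list of (prob, next_state); R[s][a] -> reward;
    actions(s) -> available actions (empty for terminal states).
    Returns the optimal state values V*."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):          # terminal state: value stays 0
                continue
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in T[s][a])
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy dice-game-style MDP: V(in) = max(10, 4 + (2/3) V(in)) = 12.
states = ["in", "end"]
T = {"in": {"stay": [(2/3, "in"), (1/3, "end")], "quit": [(1.0, "end")]}}
R = {"in": {"stay": 4.0, "quit": 10.0}}
acts = lambda s: ["stay", "quit"] if s == "in" else []
V = value_iteration(states, acts, T, R, gamma=1.0)
print(round(V["in"], 4))  # 12.0 (staying beats quitting)
```

Model-free Monte Carlo replaces the known `T` and `R` with averages over sampled episodes, which is the comparison the exercise block asks for.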

Lecture 13: From Language Modeling to Transformers
- n-grams -> neural LMs
- tokens, embeddings
- attention/self-attention, masking (main chunk)
- transformer block, positional info
- scaling, constraints, wrap-up
[in-class] 4.25-hrs, [review] 2.0-hrs, [exercise] 4.0-hrs, [preparation] 2.0-hrs -> 12.25-hrs
Focus: exercises tracing attention on small examples + conceptual checks.
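"Tracing attention on small examples" can look like the following from-scratch scaled dot-product attention (toy 2-D vectors, no batching or learned projections; the numbers are invented so that the query matches the first key far more strongly):

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of the query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]          # stable softmax
        weights = [e / sum(exps) for e in exps]
        # Output is the weight-averaged value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[10.0, 0.0], [0.0, 10.0]]   # query aligns with the first key
V = [[1.0, 2.0], [3.0, 4.0]]
print([round(x, 3) for x in attention(Q, K, V)[0]])  # close to [1.0, 2.0]
```

Because the first score dominates after the softmax, the output is almost exactly the first value vector, which is the intuition the tracing exercise builds.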

Lecture 14: LLMs as Foundation Models
- pipeline
- pre-training vs. post-training
- instruction tuning
- RLHF/RLVR (conceptual)
- test-time scaling
- system constraints; evaluation/failure-modes wrap-up
[in-class] 4.25-hrs, [review] 2.0-hrs, [exercise] 6.0-hrs, [preparation] 2.0-hrs -> 14.25-hrs
Focus: exercises analyzing pipelines, evaluation, failure modes, and tool/RAG scenarios.
教科書
/Textbook(s)
Recommended textbooks are not essential. However, for further exploration (advanced topics or another perspective on the same material), the following resources are recommended. Note that terminology may differ slightly between textbooks.

[1] Russell and Norvig. Artificial Intelligence: A Modern Approach. A comprehensive reference for the topics we cover.
[2] Koller and Friedman. Probabilistic Graphical Models. Covers factor graphs and Bayesian networks.
[3] Sutton and Barto. Reinforcement Learning: An Introduction. Covers Markov decision processes and reinforcement learning (free online).
[4] Hastie, Tibshirani, and Friedman. The Elements of Statistical Learning. Covers machine learning from a rigorous statistical perspective (free online).
[5] Afshine Amidi and Shervine Amidi. Super Study Guide: Transformers & Large Language Models.
成績評価の方法・基準
/Grading method/criteria
Final Exam: 60
Quiz: 10
Report: 20
Program: 20
履修上の留意点
/Note for course registration
It is highly recommended to take calculus, linear algebra, probability and statistics, and automata and languages before starting this course. In addition, the demos and exercises in this course are mainly based on Python, so students are encouraged to be proficient in Python programming.
参考(授業ホームページ、図書など)
/Reference (course
website, literature, etc.)
Please note that recommended textbooks are not essential.
[1] Russell and Norvig. Artificial Intelligence: A Modern Approach. A comprehensive reference for the topics we cover.
[2] Koller and Friedman. Probabilistic Graphical Models. Covers factor graphs and Bayesian networks.
[3] Sutton and Barto. Reinforcement Learning: An Introduction. Covers Markov decision processes and reinforcement learning (free online).
[4] Hastie, Tibshirani, and Friedman. The Elements of Statistical Learning. Covers machine learning from a rigorous statistical perspective (free online).
[5] Afshine Amidi and Shervine Amidi. Super Study Guide: Transformers & Large Language Models.



開講学期
/Semester
2026年度/Academic Year  2学期 /Second Quarter
対象学年
/Course for;
3rd year
単位数
/Credits
3.0
責任者
/Coordinator
FAYOLLE Pierre-Alain
担当教員名
/Instructor
FAYOLLE Pierre-Alain, NISHIDATE Yohei, TAKAHASHI Shigeo, HIRATA Naru
推奨トラック
/Recommended track
先修科目
/Essential courses
更新日/Last updated on 2026/01/09
授業の概要
/Course outline
The computer graphics (CG) course teaches techniques used for creating, manipulating and animating images of three-dimensional objects by computers.
CG techniques and algorithms are used in fields such as:
* CAD (Computer-aided design): mechanical design, architectural design and circuit design, rapid prototyping.
* Entertainment: film production, animation, and games.
* Virtual Reality: flight simulation, operation, and support.
* Visualization: results of simulation, information visualization.
授業の目的と到達目標
/Objectives and attainment
goals
[Corresponding Learning Outcomes]
(A) Graduates are aware of their professional and ethical responsibilities as an engineer, and are able to analyze societal requirements, and set, solve, and evaluate technical problems using information science technologies in society.

[Competency Codes]
C-GV-004, C-GV-005, C-GV-006, C-GV-007

Course Objectives:
This course provides an introduction to computer graphics.
It provides algorithmic and mathematical foundations for the main components of computer graphics: modeling, rendering and animation.
It provides a practical introduction to implementing these algorithms including programming using the OpenGL library.
授業スケジュール
/Class schedule
Professor Fayolle:
1) Introduction to CG and OpenGL
2) Description of the 3D viewing pipeline
3) Geometric transformations, projective transformations
4) Illumination model, lighting and shading
5) Parameterization, texture mapping
6) Animation (skeleton based, kinematics)
7) Animation (physics-based)
8) Ray-casting and ray-tracing
9) Rasterization, hidden surface removal, compositing
10) Geometric modeling (surface/solid modeling)
11) Geometric modeling (polygon mesh processing)
12) Geometric modeling (parametric curves and surfaces)
13) GPU programming (GLSL, vertex and fragment shaders, ...)
14) GPU programming (Advanced shaders)

Professor Takahashi:
1) Guidance
2) Solid modeling
3) Boundary representations
4) 2D geometric transformations
5) 3D geometric transformations
6) Viewing transformations
7) Hidden surface removal
8) Shading
9) Mapping
10) Animation
11) Ray tracing
12) Free-form curves and surfaces
13) Non-photorealistic rendering
14) Fundamentals of GPU programming

[Preparation/Review] Before each class, students should prepare by studying the lecture materials and corresponding readings for the content indicated in the course plan. The corresponding programming tasks may also need to be completed outside of the exercise classes if necessary. The typical preparation/review time per session is around 6 hours.
教科書
/Textbook(s)
Slides and reading materials will be provided by the instructors.
成績評価の方法・基準
/Grading method/criteria
* Exercises, homework, and quizzes: 40%
* Final examination: 60%
履修上の留意点
/Note for course registration
Familiarity with the following subjects is expected:
* Calculus, linear algebra
* Programming
参考(授業ホームページ、図書など)
/Reference (course
website, literature, etc.)
Course website (Professor Fayolle):
https://web-int.u-aizu.ac.jp/~fayolle/teaching/cg/index.html



開講学期
/Semester
2026年度/Academic Year  2学期 /Second Quarter
対象学年
/Course for;
4th year
単位数
/Credits
3.0
責任者
/Coordinator
YAGUCHI Yuichi
担当教員名
/Instructor
YAGUCHI Yuichi, WANG Zhishang
推奨トラック
/Recommended track
先修科目
/Essential courses
更新日/Last updated on 2026/02/05
授業の概要
/Course outline
“Seeing is believing.”
Images play a fundamental role in everyday communication and in accumulating knowledge. With the widespread use of mobile devices, anyone can easily capture and share images, and the Web provides access to an enormous volume of visual information. At the same time, technologies that go beyond simply “viewing” images—namely, computational methods that automatically process images, extract structures and features, and, when needed, connect them to recognition and understanding—are increasingly important as foundations for real-world deployment and industrial applications.

Image processing is the set of techniques for appropriately transforming images acquired by cameras and extracting useful information from them. This course focuses on Low-Level Vision as the foundation of digital image processing. Topics include image representation and I/O, intensity transformations, spatial filtering (convolution) and morphological processing, frequency-domain processing (FFT-based spectrum analysis and filter design, ringing artifacts), color processing, transforms such as wavelets, segmentation, feature extraction (e.g., SIFT), and basic concepts of compression.

In each exercise session, students implement and apply methods using Python (OpenCV). They visualize results (output images, histograms, spectra, difference images) and compare outcomes using quantitative metrics when appropriate. In addition, the course assigns multiple report-based assignments (A1–A7) to cultivate practical understanding and application skills through designing, comparing, and evaluating image-processing pipelines.
授業の目的と到達目標
/Objectives and attainment
goals
[Corresponding Learning Objectives]
(A) Be able to define, solve, and evaluate technical problems
(B) Be able to learn proactively throughout one’s lifetime
(C) Be able to apply expertise in mathematics, natural science, and information technology—and scientific thinking—to problem solving

[Competency codes]
C-AL-004-1, C-AL-005-2, C-AL-006-2, C-GV-007, C-GV-004, C-GV-006, C-GV-005

[Course learning outcomes]

Upon successful completion of this course, students will be able to:
1. Explain digital image representations (e.g., intensity levels and histograms), implement intensity transformations (negative/log/gamma/histogram operations) and binarization methods (fixed threshold/Otsu/adaptive), and explain differences in results.
2. Implement convolution-based smoothing, edge detection, and sharpening (e.g., Laplacian, unsharp masking, high-boost filtering), and analyze trade-offs between noise suppression and sharpness.
3. Apply morphological operations (e.g., opening/closing) for region masking and denoising, and explain method characteristics.
4. Visualize frequency spectra using FFT/IFFT and design/apply frequency-domain filters (LPF/HPF/BPF: Ideal/Butterworth/Gaussian). Explain side effects such as ringing, and evaluate results using metrics (e.g., PSNR/SSIM).
5. Use color space conversions (e.g., HSV) and multi-resolution features (e.g., wavelets) to extract line structures (e.g., Canny + Hough) and compare/interpret results.
6. Apply representative segmentation methods (thresholding, region growing, clustering, GrabCut, SOM, pre-trained models, etc.) and feature extraction methods (Harris/Shi-Tomasi/SIFT, template matching, BoVW, etc.), and evaluate appropriateness of method selection for targets and conditions.
7. Construct reproducible Python (OpenCV)-based pipelines and produce well-organized reports including results (images/figures/metrics) and concise discussions.
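As a taste of outcome 1, Otsu's method can be implemented from scratch in a few lines; this is an illustrative sketch (the course exercises use OpenCV instead, e.g. `cv2.threshold` with the `THRESH_OTSU` flag), and the pixel values below are invented to be clearly bimodal:

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold t that maximizes between-class variance
    (Otsu's method) for a flat list of integer intensities."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]                 # background pixel count
        sum0 += t * hist[t]           # background intensity sum
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Dark cluster near 10, bright cluster near 200.
pixels = [8, 10, 12, 9, 11, 198, 200, 202, 199, 201]
print(otsu_threshold(pixels))  # a threshold separating the two clusters
```

Comparing this output against a fixed threshold and an adaptive threshold on real images is exactly the kind of comparison outcome 1 asks for.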

[Learning approach]
Students learn core concepts and representative methods through lectures, and then implement/visualize/compare them in exercises. Through the report assignments (A1–A7), students strengthen application skills via multi-method comparisons, pipeline design, evaluation, and discussion.
授業スケジュール
/Class schedule
[Class format]
- In each meeting, the concepts and methods introduced in the lecture (100 minutes) are followed by exercise (50 minutes) where students implement and minimally validate the methods.
- Out-of-class study time is allocated to reproduction/organization and report assignments, ensuring consistent learning effects.

[Expected out-of-class study time (per meeting)]
The standard out-of-class study time is 430 minutes per meeting, allocated as follows:
- Preparation (60 min): reading lecture slides + previewing exercise code/notebooks
- Review (70 min): re-running the exercise and organizing outputs
- Assignment work (150 min average): this is an average allocation; one report assignment corresponds to about two meetings (≈300 min total)
- Additional review (150 min): re-reading slides, reading reference books/papers, and drills for the final exam
Report assignments are designed as one assignment per two meetings, and a two-week submission window is provided. In-class materials (templates, example runs, evaluation viewpoints) are provided as scaffolding, so that learning outcomes can be achieved step-by-step:
implement in class → reproduce and organize out of class → apply in the assignment.

[Weekly plan (14 meetings)]
1. Introduction to Image Processing (roles of images, workflow, OpenCV introduction)
Exercise: environment check, image I/O, pixel-wise operations
2. Intensity Transform (histograms, binarization, log/gamma, intro to smoothing/enhancement)
Exercise: histograms, intensity transforms, thresholding
3. Convolution and Spatial Filter I (convolution, smoothing, edge filters, Laplacian)
Exercise: compare smoothing and edge filters
4. Convolution and Spatial Filter II (sharpening, unsharp/high-boost, understanding noise amplification)
Exercise: sharpening and pipeline comparison
5. Morphological Filter (dilation/erosion, opening/closing, mask generation, denoising)
Exercise: masking and salt-and-pepper denoising
6. Frequency Filtering I (FFT/DFT, spectra, centering, frequency-domain filtering framework)
Exercise: FFT/IFFT and spectrum visualization
7. Frequency Filtering II (LPF/HPF/BPF, ringing, frequency enhancement, PSNR/SSIM, etc.)
Exercise: filter comparison and metrics table
8. Color Image Processing (color models, HSV, channel decomposition and processing)
Exercise: HSV conversion, hue operation, V-channel processing
9. Wavelet and Other Transforms (Hough line detection, multi-resolution, DWT subband features)
Exercise: Canny+Hough vs Wavelet+Hough comparison
10. Image Segmentation I (thresholding, region growing, clustering, etc.)
Exercise: implement and compare basic methods
11. Image Segmentation II (GrabCut/GraphCut, SOM, ONNX, etc.)
Exercise: compare GrabCut, SOM, ONNX-based approach
12. Image Features and Extraction I (corner detection: Harris, Shi-Tomasi; invariance concepts)
Exercise: corner visualization and quantitative comparison
13. Image Features and Extraction II (SIFT, template matching, BoVW)
Exercise: SIFT visualization; template vs BoVW detection comparison
14. Image Compression (lossless/lossy, PNG/JPEG, compression vs degradation and evaluation)
Exercise: compression, reconstruction, and comparison
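Meetings 3-4 revolve around convolution; as a reference, here is a minimal from-scratch "valid" 2D convolution on nested lists (illustrative toy code; the exercises themselves use OpenCV, e.g. `cv2.filter2D`):

```python
def convolve2d(img, kernel):
    """'Valid'-mode 2D convolution (really cross-correlation, as in most
    image libraries) of a 2D list img with a 2D list kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            # Sum of elementwise products over the kernel window.
            acc = sum(img[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

# A 3x3 box filter sums the neighborhood of the single bright pixel;
# dividing by 9 would give the mean (smoothing) filter.
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
box3 = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(convolve2d(img, box3))  # [[9]]
```

Substituting a Laplacian kernel such as [[0, 1, 0], [1, -4, 1], [0, 1, 0]] into the same function reproduces the edge-filter comparison from the exercises.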

[Assignments (A1–A7)]
To consolidate understanding and application skills, the course assigns seven report assignments (A1–A7, 10 points each). Each assignment requires constructing an image processing pipeline, visualization of results, comparison/discussion, and code submission.
A1: Image Enhancement and Binarization (L1–L2)
Apply intensity transforms (negative/log/gamma) and thresholding methods (fixed T/Otsu/adaptive), and compare results with histograms and observations.
A2: Spatial Filtering and Image Enhancement (L3–L4)
Compare smoothing (Mean/Gaussian/Median) and sharpening (Laplacian/Unsharp/High-boost). Design two pipelines that suppress noise amplification and evaluate.
A3: Morphological Filter – Image Masking and Denoising (L5)
Generate region masks using morphology (finger) and compare two approaches for salt-and-pepper denoising + metallic region extraction (circuit): median-based vs morphology-based.
A4: Frequency Domain Filtering and Analysis (L6–L7)
Confirm reconstruction via FFT/IFFT, compare LPF/HPF/BPF (Ideal/Butterworth/Gaussian), visualize ringing via difference images, and evaluate using PSNR/SSIM.
A5: Color-Based and Multi-Resolution Line Detection (L8–L9)
Perform hue transformation in HSV and line detection via Canny+Hough using the V channel. Compare with wavelet-based directional features (Haar DWT).
A6: Comparative Segmentation Methods (L10–L11)
Extract the foreground (horse) using six methods (thresholding/region growing/K-means/GrabCut/SOM/U²-Net ONNX). Compare completeness, background suppression, and tuning difficulty.
A7: Feature Extraction (L12–L13)
Using robotgroup.jpg and face.jpg, perform corner detection (Harris/Shi-Tomasi), SIFT visualization, template matching, and BoVW-based search; compare BoVW vs template matching.
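Several assignments (notably A4) evaluate results with PSNR; as a reference, a from-scratch sketch on flat pixel lists (the pixel values are invented; OpenCV and scikit-image provide equivalent built-ins):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size images
    (flat lists of pixel values); higher means closer to the reference."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")                       # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref  = [100, 120, 140, 160]
test = [101, 119, 141, 159]   # off by 1 everywhere -> MSE = 1
print(round(psnr(ref, test), 2))  # 48.13 (dB)
```

Because PSNR depends only on pixel-wise MSE, it can rank a ringing-heavy result above a perceptually better one, which is why the assignments pair it with SSIM.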

[Preparation and review]
Before each class, students should read the distributed lecture materials and review the exercise notebook steps and code. After class, students should re-run the exercise, organize outputs, and summarize differences and reasons. Students must also complete the report assignments (A1–A7) by the specified deadlines.
Out-of-class study time is 430 minutes per meeting as a guideline, consisting of:
- Preparation 60 min (slides + code preview)
- Review 70 min (exercise reproduction + organization)
- Assignment 150 min (weekly assignment work; averaged)
- Additional review 150 min (deep reading of slides, references/papers, final-exam drills)
教科書
/Textbook(s)
[Textbook(s)]
No required textbook. Lecture slides, exercise notebooks, and supplementary materials will be distributed via ELMS.

[References (recommended; not required to purchase)]
- R. C. Gonzalez and R. E. Woods, Digital Image Processing, 4th ed., Pearson, 2017.
- CG-ARTS Association, Digital Image Processing (Revised 2nd ed.), 2024.
(Recommended for review and further study.)
成績評価の方法・基準
/Grading method/criteria
- Report assignments (A1–A7): 70 points (10 points × 7)
- Final exam: 30 points
- Total: 100 points
Attendance is confirmed via in-class mini tasks/submissions (not counted in the grade).
履修上の留意点
/Note for course registration
- Materials: slides, notebooks, and supplementary documents will be distributed via ELMS.
- Submission: report assignments (A1–A7) are submitted as PDF via ELMS. Deadlines are announced for each assignment (guideline: one assignment per two lectures; two-week window).
- Late submission: if the first submission is past the deadline, -4 points (minimum 0) will be applied unless prior consultation is approved for valid reasons (illness, job hunting, official events, etc.).
- Absence/late arrival: prior notice is strongly recommended when absence is expected.
- Remote teaching: if the instructor is away for conferences, the class may be delivered online (e.g., Zoom). Students are still required to attend the designated classroom and have attendance verified by a substitute instructor or TA.
参考(授業ホームページ、図書など)
/Reference (course
website, literature, etc.)
- Practical experience: the instructor has experience in R&D and real-world applications of image processing and recognition (e.g., facility monitoring, visual measurement, robot vision). Implementation pitfalls and evaluation viewpoints for practical deployment will be introduced.



開講学期
/Semester
2026年度/Academic Year  1学期 /First Quarter
対象学年
/Course for;
4th year
単位数
/Credits
3.0
責任者
/Coordinator
NARUSE Keitaro
担当教員名
/Instructor
NARUSE Keitaro
推奨トラック
/Recommended track
先修科目
/Essential courses
更新日/Last updated on 2026/02/06
授業の概要
/Course outline
In modern society, computer engineers should understand basic robotics and control theory, because computers are embedded in many robots and control devices. This course gives computer science and engineering majors fundamental knowledge of both. In the control-theory part, students study the concept of feedback control along with related theory and methods. Because robots work in the real world, students also study how to model and represent the world in a computer and how a robot can make plans using those models. Students deepen their understanding of these topics through a series of exercises.
授業の目的と到達目標
/Objectives and attainment
goals
[Corresponding Learning Outcomes]
(A) Graduates are aware of their professional and ethical responsibilities as an engineer, and are able to analyze societal requirements, and set, solve, and evaluate technical problems using information science technologies in society.

[Competency Codes]
C-AL-002-1, C-AL-005-1

For the robotics part, students will learn basic theory and methods for representing robot motion mathematically, as well as planning methods for robots:
(A1) Configuration space method: representing robots and objects in a computer.
(A2) Planning methods such as the artificial potential method, road map method, and cell decomposition method: making plans for robots.

For the automatic control part, students will learn fundamental knowledge of feedback control, which includes:
(B1) Transfer functions and block diagrams: modeling a dynamical system.
(B2) Stability and steady-state error: understanding how a control system works.
(B3) PID control systems: designing a controller for a target system.
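The flavor of (B1)-(B3) can be previewed with a discrete PID loop controlling a toy first-order plant dx/dt = -x + u; this is an illustrative sketch with invented gains, not course material:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Discrete PID controlling the plant dx/dt = -x + u via Euler steps.
    Returns the final plant output, which should approach the setpoint."""
    x = 0.0                              # plant state
    integral = 0.0
    prev_error = setpoint - x
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt           # I term: accumulates steady-state error
        derivative = (error - prev_error) / dt   # D term: damps fast changes
        u = kp * error + ki * integral + kd * derivative
        x += (-x + u) * dt               # Euler step of the plant dynamics
        prev_error = error
    return x

print(round(simulate_pid(kp=2.0, ki=1.0, kd=0.1), 3))  # close to 1.0
```

Setting `ki=0` leaves a nonzero steady-state error, directly illustrating the role of the integral term discussed in the steady-state-error lecture.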
授業スケジュール
/Class schedule
Each class will be conducted with the first two periods in lecture format and the remaining period in exercise format.

#1: Overview and introduction
#2: Configuration space for circular robot
#3: Configuration space for rectangular robot
#4: Artificial potential method
#5: Road map method
#6: Cell decomposition method
#7: Sampling based planning
#8: Robot equations of motion
#9: Principle of feedback control
#10: Steady state error
#11: Stability of control system
#12: PID control
#13: Advantage of feedback control
#14: Summary

[Preparation/Review]
Preparation: Before each class, prepare by studying the lecture materials as well as implementing sample codes for the content indicated in the course plan.
Review: Complete any unfinished exercises before the next class, as well as the extra problems and analysis shown in class.

The typical preparation/review time per session is 6–7 hours.
教科書
/Textbook(s)
None.
Related materials are distributed via the courseware.
成績評価の方法・基準
/Grading method/criteria
Quiz: 20%
Exercise: 20%
Final exam: 60%
履修上の留意点
/Note for course registration
As related background, students are expected to understand programming languages, linear systems, and electrical circuits.
参考(授業ホームページ、図書など)
/Reference (course
website, literature, etc.)
LMS



開講学期
/Semester
2026年度/Academic Year  1学期 /First Quarter
対象学年
/Course for;
4th year
単位数
/Credits
3.0
責任者
/Coordinator
NASSANI Alaeddin
担当教員名
/Instructor
NASSANI Alaeddin
推奨トラック
/Recommended track
先修科目
/Essential courses
No prerequisites beyond basic programming courses.
However, these courses are recommended:
  LI10: Intro. to Multimedia Systems
  IT02: Computer Graphics
更新日/Last updated on 2026/02/04
授業の概要
/Course outline
This course provides a comprehensive introduction to the field of interactive technologies, with a primary focus on Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). Students will explore fundamental principles of Human-Computer Interaction (HCI) and 3D User Interface (UI) design while gaining practical skills in industry-standard software.

We will cover the development pipeline using Unity. Core topics include 3D modeling, physics simulation, sound design, and game engine programming. We will explore VR interaction, tracking, and collaborative MR.

Through practical hands-on exercises, students will create their own immersive virtual worlds. In the final group project, students design, develop, and implement a fully interactive XR experience.
授業の目的と到達目標
/Objectives and attainment
goals
[Corresponding Learning Outcomes]
(A) Graduates are aware of their professional and ethical responsibilities as an engineer, and are able to analyze societal requirements and set, solve, and evaluate technical problems using information science technologies in society.

(C) Graduates are able to apply their professional knowledge of mathematics, natural science, and information technology, as well as the scientific thinking skills such as logical thinking and objective judgment developed through the acquisition of said knowledge, towards problem solving.

[Competency Codes]
C-GV-001, C-HI-005

By the end of the course, students should be able to design, develop, and implement their own interactive Virtual Reality (VR), Augmented Reality (AR), or Mixed Reality (MR) experiences. They will demonstrate proficiency in industry-standard tools such as Unity, Blender, and Audacity, applying principles of Human-Computer Interaction (HCI) and 3D User Interface (UI) design to create engaging and immersive virtual worlds, games, or simulations.
授業スケジュール
/Class schedule
[Course Content and Methods]
- Lectures & Exercises: Meetings 1–14 combine theory with hands-on exercise sessions using tools such as Unity, Blender, and other game engines.

- Group Work: Students will collaborate on the design and development of the Final Project.

[Class Schedule]
Meeting 1: Overview & 2D Intro
Meeting 2: 3D Intro & Blender
Meeting 3: Game Engines & Unity
Meeting 4: Physics Simulation
Meeting 5: Sound in Games
Meeting 6: Unity Programming
Meeting 7: HMD & VR Intro
Meeting 8: VR Interaction
Meeting 9: VR Events & Haptics
Meeting 10: VR UI
Meeting 11: Embodied Avatars
Meeting 12: Tracking
Meeting 13: Sensing
Meeting 14: Collaborative MR
Meeting 15: Final project

[Pre-class and Post-class Learning]
- Preparation: Review lecture notes and install necessary software assets/packages prior to the session.

- Review/Assignments: Complete the checkpoint exercises to validate skills and make steady progress on the Group Project.

- Out-of-class study time: 6–7 hours per session (calculated based on 3 credits = 135 hours of total learning time)
教科書
/Textbook(s)
Lecture notes prepared by instructors, TAs, & SAs.

The Vr Book: Human-centered Design for Virtual Reality
Jason Jerald
978-1970001129

Research Methods in Human-Computer Interaction
Jonathan Lazar
978-0470723371
成績評価の方法・基準
/Grading method/criteria
The majority of coursework in this course revolves around hands-on lab exercises, prioritizing the creative utilization of digital content creation tools to foster design innovation and originality. Exercises serve to validate specific skills to create engaging and interactive VR/AR experiences. Regular feedback will be provided throughout the course to guide students and help them refine their work.

The course assessment breakdown is as follows:
Quizzes: 25%
Assignments: 35%
Group Project: 40%

These assessments ensure comprehensive evaluation of student learning and achievement throughout the course.
参考(授業ホームページ、図書など)
/Reference (course
website, literature, etc.)
Photopea image editor: https://www.photopea.com
Blender: https://www.blender.org
Unity: https://unity.com
Unity tutorials: https://learn.unity.com/tutorials
Google Cardboard: https://arvr.google.com/cardboard/apps/, https://developers.google.com/cardboard
Audacity audio editor: https://www.audacityteam.org/
macOS "say" TTS (text-to-speech) utility: https://support.apple.com/en-is/guide/mac-help/mh27448/mac
GarageBand: https://www.apple.com/mac/garageband/
Vuforia: https://developer.vuforia.com/
Snap AR & Lens Studio: https://ar.snap.com/
ZIG SIM: https://zig-project.com/
Research Methods in Human-Computer Interaction: https://www.sciencedirect.com/book/9780128053904/research-methods-in-human-computer-interaction

The course instructor has experience in VR and AR including development, research and teaching.



開講学期
/Semester
2026年度/Academic Year  1学期 /First Quarter
対象学年
/Course for;
3rd year
単位数
/Credits
4.0
責任者
/Coordinator
CHEN Wenxi
担当教員名
/Instructor
CHEN Wenxi, TRUONG Cong Thang, WANG Zhishang
推奨トラック
/Recommended track
先修科目
/Essential courses
更新日/Last updated on 2026/01/21
授業の概要
/Course outline
Signals and systems are present in every aspect of our world. Examples of signals include speech, audio, image, and video signals in consumer electronics such as TVs, PCs, and smartphones; vital signs in medical systems; and radar waveforms in military equipment. Signal processing is concerned with the representation, transformation, and manipulation of signals, and with the extraction of the significant information they contain. For example, we may wish to remove noise from speech signals to make them clearer, or to enhance an image to make it look more natural. Signal processing is one of the fundamental theories and techniques for constructing modern information systems. Over the last century, many theories and methods have been proposed and widely studied in signal processing. This course covers the concepts of continuous-time and discrete-time signals; representations of signals in the time, frequency, and other transform domains; representation and analysis of systems; and filter structures and design.
The course is a prerequisite for further study in related courses such as voice processing, image processing, audio and video signal compression, pattern recognition and classification, biomedical signal processing, and the development of communication and security systems.
授業の目的と到達目標
/Objectives and attainment
goals
[Corresponding Learning Outcomes]
(A) Graduates are aware of their professional and ethical responsibilities as an engineer, and are able to analyze societal requirements, and set, solve, and evaluate technical problems using information science technologies in society.
(B) This course is to provide students with the foundations and tools of signal processing, particularly the time-invariant system in both continuous and discrete domains. We will mainly study the following topics: signal representation in time domain, Fourier transform, sampling theorem, linear time-invariant system, discrete convolution, z-transform, discrete Fourier transform, and discrete filter design.
After this course, the students should be able to understand how to analyze a given signal or system using various transforms; how to process signals to make them more useful; and how to design a signal processor (digital filter) for a given problem.
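As a small illustration of one of these topics, the discrete convolution that characterizes a discrete-time linear time-invariant system can be sketched in plain Python. The signals below are toy examples chosen only for illustration:

```python
# Discrete convolution y[n] = sum_k x[k] * h[n-k],
# the defining operation of a discrete-time LTI system.
def convolve(x, h):
    """Full linear convolution of two finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Example: a 3-point moving-average filter smoothing an impulse train.
x = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
h = [1/3, 1/3, 1/3]
print(convolve(x, h))
```

In practice, library routines (e.g., an FFT-based convolution) replace this O(N·M) double loop, but the definition above is what the course's analysis builds on.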
授業スケジュール
/Class schedule
Prof. TRUONG’s Class
1. Introduction to Signals and Systems
2. Linear Time-Invariant System (continuous-time)
3. Linear Time-Invariant System (discrete-time)
4. Continuous Fourier Series and Fourier Transform
5. Discrete Fourier Series, Fourier Transform, and FFT
6. Fourier Transform Analysis of Signals and Systems
7. Midterm exam
8. Laplace Transform
9. Z-Transform
10. Structures for Digital Filters I: FIR Filter
11. Digital Filter Design I: FIR Filter
12. Structures for Digital Filters II: IIR Filter
13. Digital Filter Design II: IIR Filter
14. Applications of Signal Processing

Prof. CHEN and WANG’s Class
1. Introduction to Signals and Systems
2. Linear Time-Invariant System (continuous-time)
3. Linear Time-Invariant System (discrete-time)
4. Continuous Fourier Series and Fourier Transform
5. Discrete Fourier Series, Fourier Transform, and FFT
6. Fourier Transform Analysis of Signals and Systems
7. Laplace Transform
8. Z-Transform
9. Structures for Digital Filters I: FIR Filter
10. Digital Filter Design I: FIR Filter
11. Structures for Digital Filters II: IIR Filter
12. Digital Filter Design II: IIR Filter
13. Applications of Signal Processing
14. Review
教科書
/Textbook(s)
Textbooks:
1. Schaum's Outline of Signals and Systems, (Schaum's Outlines) 2019/10/16 Hsu, Hwei P., 3835 Yen
2. Schaum’s Outline of Digital Signal Processing, (Schaum's Outlines) 2011/9/7 Hayes, Monson H., 3658 Yen

Reference books:
1. Digital Signal Processing: A Computer-Based Approach 2010/9/10, Sanjit K. Mitra, 20106 Yen
2. ディジタル信号処理(第2版・新装版)–2020/12/17, 萩原 将文  (著)、森北出版、2420 円
3. MATLAB対応 ディジタル信号処理(第2版)–2021/11/12, 川又政征 (著), 阿部正英 (著), 八巻俊輔 (著), 樋口龍雄 (監修)、森北出版、3630 円
成績評価の方法・基準
/Grading method/criteria
Prof. TRUONG’s Class
* Quizzes: 10%
* Exercises: 40%
* Mid-term exam: 20%
* Final exam: 30%

Prof. CHEN and WANG’s Class
* Quizzes: 10%
* Exercises: 40%
* Final exam: 50%
履修上の留意点
/Note for course registration
None
参考(授業ホームページ、図書など)
/Reference (course
website, literature, etc.)
1. Instructors
The course instructor Wenxi Chen has practical working experience. He worked for Nihon Kohden Industrial Corp. for 5 years, where he was involved in R&D of bioinstrumentation, signal processing, and data analysis. Based on his experience, he can teach the basics of signal processing and linear systems.

The course instructor Cong Thang Truong has practical working experience. He worked for the Electronics and Telecommunications Research Institute (ETRI) of South Korea for 5 years, where he was involved in R&D of multimedia signal processing and communications. He also actively contributed to ISO/IEC & ITU-T standards on signals and systems for more than 10 years. Based on his experience, he can teach the basics of signal processing and linear systems.

The course instructor Zhishang Wang has practical working experience. He applied computational techniques to the analysis and processing of image data at the University of Freiburg for 3 years, and performed data processing and analysis at the University of Aizu for 5 years. He has also conducted research on data analysis in collaboration with Rexev Corp. Based on his experience, he can teach the basics of signal processing and linear systems.

2. MOODLE for Handouts, Quizzes and Exercises
https://elms.u-aizu.ac.jp/login/index.php

3. MIT OpenCourseWare "Signals and Systems"
https://ocw.mit.edu/resources/res-6-007-signals-and-systems-spring-2011/index.htm



開講学期
/Semester
2026年度/Academic Year  1学期 /First Quarter
対象学年
/Course for;
4th year
単位数
/Credits
3.0
責任者
/Coordinator
VILLEGAS OROZCO Julian Alberto
担当教員名
/Instructor
VILLEGAS OROZCO Julian Alberto, NASSANI Alaeddin
推奨トラック
/Recommended track
先修科目
/Essential courses
LI10 Introduction to Multimedia Systems
更新日/Last updated on 2026/01/29
授業の概要
/Course outline
Hearing is arguably the second most important sensory modality, and it is sometimes preferable to vision for displaying and acquiring information. For example, a car navigation system delivers guidance using speech, or you verbally ask your mobile phone to dial a number.

In this course, we briefly review the main characteristics of sound, audio, and their processing for human-computer interaction. The purpose of this course is twofold:

   1. To learn techniques for extracting information from acoustic signals.
   2. To use acoustic signals to display information.
授業の目的と到達目標
/Objectives and attainment
goals
[Corresponding Learning Outcomes]
(A)Graduates are aware of their professional and ethical responsibilities as an engineer, and are able to analyze societal requirements, and set, solve, and evaluate technical problems using information science technologies in society.

[Competency Codes]
C-GV-001, C-GV-002

- Students will develop the ability to extract meaningful information from acoustic signals for use in various applications, including speech processing, music analysis, and environmental sound recognition.

- Students will learn to apply acoustic signal processing techniques to explore and analyze large datasets, leveraging audio as a tool for big data applications.

- By the end of the course, students will be able to evaluate and select the most appropriate audio processing techniques based on application constraints such as real-time performance, computational efficiency, and data accuracy.
授業スケジュール
/Class schedule
1 Course overview, introduction to Pure-data
2 Physics of sound
3 Sound waves and rooms
4 Sound perception
5 Sound perception (continuation)
6 Audio signals
7 Audio signals (continuation)
8 Electroacoustics, human voice, and musical instruments
9 Time-frequency processing of audio
10 Digital filters & audio effects
11 Spatial hearing
12 Sonification
13 Speech technologies (synthesis)
14 Speech technologies (recognition)
教科書
/Textbook(s)
• V. Pulkki and M. Karjalainen, Communication acoustics: an introduction to speech, audio and psychoacoustics. John Wiley & Sons, 2015.
• T. Hermann, A. Hunt, and J. G. Neuhoff, The sonification handbook. Logos Verlag Berlin, 2011.
• W. M. Hartmann, Signals, Sound, and Sensation. Modern acoustics and signal processing, Wood- bury, NY; USA: American Institute of Physics, 1997.
• Various materials prepared by the instructor
成績評価の方法・基準
/Grading method/criteria
Exercises 40%
Quizzes 30%
Final exam 30%
履修上の留意点
/Note for course registration
This course is offered in English.
参考(授業ホームページ、図書など)
/Reference (course
website, literature, etc.)
The course instructor has practical working experience. He worked as an Ikerbasque researcher for about three years at the phonetics laboratory of the University of the Basque Country.

Class: Lecture followed by Exercises



開講学期
/Semester
2026年度/Academic Year  1学期 /First Quarter
対象学年
/Course for;
3rd year
単位数
/Credits
3.0
責任者
/Coordinator
HONDA Chikatoshi
担当教員名
/Instructor
HONDA Chikatoshi, TAKAHASHI Shigeo
推奨トラック
/Recommended track
先修科目
/Essential courses
MA01 Linear Algebra I, MA02 Linear Algebra II, FU01 Algorithms and Data Structures I, FU03 Discrete Systems
更新日/Last updated on 2026/02/06
授業の概要
/Course outline
Computational geometry is a key field of computer science that deals with the solution of geometric problems. In recent years, there has been increasing demand for sophisticated solutions to geometric problems involving large data sets, as well as for the ability to process these data sets at high speed. These solutions are essential in a range of fields, including geographic information systems (GIS), computer graphics (CG), computer-aided design (CAD), pattern recognition, robotics, and more.
In the first half of the class, students will learn about computational geometric concepts. In the second half, they will learn about information visualization based on various concepts and algorithms.
授業の目的と到達目標
/Objectives and attainment
goals
Students will understand the basic concepts and algorithms of computational geometry and be able to apply them to specific problems.
授業スケジュール
/Class schedule
The following lecture topics will generally be conducted in a two-period lecture format (though the duration may vary depending on the content).
During the first three exercise sessions, students will tackle problem-solving tasks using Voronoi diagrams. The focus is not on the construction of the diagram itself, but rather on the problem-solving process, from problem definition to result interpretation. Students are expected to be able to logically explain their own reasoning (explainability). The results of the exercises will be compiled into presentation materials and submitted.
The remaining four sessions will focus on the following visualization themes: multidimensional data visualizations, tree structure visualizations, network structure visualizations, and text content visualizations, corresponding to the lecture content. The learning process will be conducted in a tutorial format. For these later sessions, students are expected to solve specific problems presented during the exercises, both during the exercise sessions and outside of scheduled exercise time.
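As a hint of the computation underlying the Voronoi exercises: the Voronoi region a query point falls into is simply the region of its nearest site, which a brute-force check makes concrete. The site coordinates below are hypothetical; actual exercise data will differ:

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Brute-force Voronoi cell membership: a query point belongs to the
# region of its nearest site. Real Voronoi algorithms precompute the
# region boundaries instead of scanning all sites per query.
def nearest_site(p, sites):
    return min(range(len(sites)), key=lambda i: dist(p, sites[i]))

sites = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
print(nearest_site((1.0, 0.5), sites))  # region of site 0
```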

1, First half (Computational geometry)
1-1, Lecture content
Introduction -Examples of applications-
Line segment intersections
Voronoi diagrams
Convex hulls
Delaunay triangulations
Polygon triangulation

1-2, Practical exercise content
Problem-Solving Exercises Using Voronoi Diagrams (Exercise 1-3)

2, Second half (Data visualization representing geometric relationships)
2-1, Lecture content
Scatterplot matrices
Parallel coordinate plots
Tree diagrams
Treemaps
Node-link diagrams
Adjacency matrices
Text and document visualization
Presentation and discussion of a first-half assignment

2-2, Exercise content
Visualization of multidimensional data (Exercise 4)
Visualization of tree structures (Exercise 5)
Visualization of network structures (Exercise 6)
Visualization of text content (Exercise 7)

Lecture preparation: 1h
Lecture review: 2-4h
Exercise assignments: 1-4h
教科書
/Textbook(s)
Handouts prepared in advance
成績評価の方法・基準
/Grading method/criteria
The scores for the first and second half assignments conducted during the exercises will each account for 25% of the overall grade. The remaining 50% will be the final exam score, totaling 100 points.
参考(授業ホームページ、図書など)
/Reference (course
website, literature, etc.)
Computational Geometry: Algorithms and Applications, 3rd edition
M. de Berg et al.,
Springer, 2008

Information Visualization: An Introduction, 3rd edition
Robert Spence
Springer, 2014



開講学期
/Semester
2026年度/Academic Year  4学期 /Fourth Quarter
対象学年
/Course for;
3rd year
単位数
/Credits
3.0
責任者
/Coordinator
YAGUCHI Yuichi
担当教員名
/Instructor
YAGUCHI Yuichi, PAIK Incheon
推奨トラック
/Recommended track
先修科目
/Essential courses
更新日/Last updated on 2026/02/05
授業の概要
/Course outline
Understanding natural language is a core enabling technology for human–machine communication and for exploiting massive text information such as search, summarization, extraction, and classification. This course studies a unified pipeline for processing natural language documents available on the Web and elsewhere, covering text acquisition and preprocessing, feature representation, classification and clustering, information extraction, syntactic parsing, and finally an information retrieval (IR) system including indexing, ranking, and evaluation.

In the exercise sessions, students implement and apply representative algorithms using Python (including Google Colab) and libraries such as NLTK. Through running experiments, comparing outputs, tuning parameters, and writing concise discussions, students deepen both conceptual understanding and practical ability to design and evaluate text/IR pipelines.
授業の目的と到達目標
/Objectives and attainment
goals
[Program learning outcomes]
(A) Recognize professional and ethical responsibility as an engineer, analyze societal requirements using information science and technology, and define/solve/evaluate technical problems.

[Competency codes]
C-IS-001, C-IS-002-1, C-IS-003, C-IS-004-2, C-IS-005-1, C-PL-003, C-PL-005, C-PL-007

[Course learning outcomes]
Upon successful completion of this course, students will be able to:
- Acquire text data and perform preprocessing (normalization, tokenization/word segmentation, stopwords handling, etc.).
- Build document representations (e.g., BoW/TF-IDF) and apply them to similarity computation, dimensionality reduction, and classification/clustering.
- Extract structured information from documents via information extraction (chunking, named entities, etc.) and parsing (CFG/dependency).
- Explain the basic architecture of an IR system (indexing, ranking, evaluation metrics) and implement/evaluate a simple search function.
授業スケジュール
/Class schedule
[Class format]
Each class meeting consists of two periods of lecture (100 minutes total) followed by one period of exercise (50 minutes total). Lectures cover theory, representative methods, and evaluation viewpoints. Exercises implement and apply the methods using Python (including Google Colab) and NLTK (and related libraries), followed by comparison and discussion of results. When appropriate, practical feedback and assignment reviews are provided at milestones.

[Preparation and review]
- Before class: read the relevant lecture slides and the corresponding sections of the exercise notebook.
- After class: re-run the exercises, modify parameters, interpret results, and confirm understanding.

The standard out-of-class study time is 430 minutes per meeting, allocated as follows:
- Preparation 60 min: reading slides + reading exercise code/notebooks
- Review 70 min: reproducing exercises and organizing outputs
- Assignment work 150 min (average): weekly assignment work
- Additional review 150 min: deep reading of slides, references/papers, and drills for the final exam

[Report Assignments]
Report assignments (A1–A7) are designed as one assignment per two class meetings.
- Expected workload: 300 minutes per assignment (= 150 min × 2 meetings equivalent), including implementation, comparison, evaluation, discussion, and report writing.
- Submission window: two weeks as a standard rule.

[Weekly plan]

Week 1
- Lecture: Introduction to Information Retrieval and NLP (What is IR? Role of NLP; end-to-end pipeline; exercise environment)
- Exercise: Colab/Python setup, NLTK installation, sanity-check execution of basic text processing

Week 2
- Lecture: Text Acquisition and Corpus Basics (character encoding, Web acquisition, HTML/structured text, corpus and annotation)
- Exercise: raw text acquisition; introduction to preprocessing (encoding handling, HTML stripping, etc.)

Week 3
- Lecture: Tokenization and Normalization (sentence/word tokenization, regex, stopwords, stemming/lemmatizing)
- Exercise: compare tokenizers; build a normalization pipeline

Week 4
- Lecture: Language Model Basics (collocation/concordance, n-gram, probabilistic perspective)
- Exercise: bigram/trigram counting; observing simple language model behavior
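The Week 4 exercise (bigram/trigram counting) can be sketched in a few lines of plain Python. The token stream is a toy example; the class exercise works with NLTK corpora and helpers instead:

```python
from collections import Counter

# N-gram extraction by sliding a window of width n over the tokens.
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the cat sat on the mat".split()
bigrams = Counter(ngrams(tokens, 2))
print(bigrams.most_common(2))
```

Normalizing these counts (count of a bigram divided by the count of its first word) yields the conditional probabilities of a simple n-gram language model.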

Week 5
- Lecture: POS Tagging and Taggers (sequence labeling, unknown word problem, rule-based vs statistical approaches, evaluation)
- Exercise: build and evaluate taggers; error analysis

Week 6
- Lecture: Word Sense Disambiguation (feature design, Naive Bayes, probability/likelihood; advanced concept: HMM overview)
- Exercise: baseline WSD classifier; feature comparison

Week 7
- Lecture: Document Representation (one-hot, BoW, TF-IDF, similarity, sparse matrices)
- Exercise: compute TF-IDF; similarity computation; simple ranking
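A minimal sketch of the Week 7 pipeline, TF-IDF weighting followed by cosine similarity, using a toy three-document corpus (illustration only; the exercise uses real document collections and library implementations):

```python
import math
from collections import Counter

docs = [["apple", "banana", "apple"],
        ["banana", "cherry"],
        ["apple", "cherry", "cherry"]]

N = len(docs)
# Document frequency: number of documents each term appears in.
df = Counter(term for d in docs for term in set(d))

def tfidf(doc):
    """Sparse TF-IDF vector as a {term: weight} dict."""
    tf = Counter(doc)
    return {t: (c / len(doc)) * math.log(N / df[t]) for t, c in tf.items()}

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = [tfidf(d) for d in docs]
print(cosine(vecs[0], vecs[1]))
```

Ranking documents against a query amounts to computing this similarity between the query's vector and every document vector, then sorting.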

Week 8
- Lecture: Dimensionality Reduction and Distributed Representations (PCA/SVD/LSA concepts; overview of word2vec)
- Exercise: visualize dimensionality reduction; compare representations

Week 9
- Lecture: Document Classification & Clustering (k-means, linear classifiers, Naive Bayes, evaluation metrics)
- Exercise: comparative experiments for classification vs clustering

Week 10
- Lecture: Information Extraction (chunking/chinking, NER, relations, metadata extraction)
- Exercise: implement an IE pipeline; evaluate extracted results

Week 11
- Lecture: Syntax & Parsing (phrase structure and CFG; dependency structures; parsing concepts)
- Exercise: run CFG/dependency parsers and analyze outputs

Week 12
- Lecture: Information Retrieval I (IR system architecture, indexing, ranking: TF-IDF/BM25; evaluation: P/R/F1, MAP, NDCG)
- Exercise: implement simple IR (indexing → search → evaluation)
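The Week 12 exercise (indexing → search → evaluation) can be sketched end-to-end with an inverted index and precision/recall on toy data. The documents and relevance judgments below are hypothetical, chosen only to make the metrics concrete:

```python
from collections import defaultdict

docs = {1: "information retrieval and ranking",
        2: "natural language processing",
        3: "retrieval evaluation metrics"}

# Inverted index: term -> set of document IDs containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    """Boolean AND-search: documents containing every query term."""
    sets = [index[t] for t in query.split()]
    return set.intersection(*sets) if sets else set()

retrieved = search("retrieval")
relevant = {1}  # hypothetical relevance judgments
precision = len(retrieved & relevant) / len(retrieved)
recall = len(retrieved & relevant) / len(relevant)
print(retrieved, precision, recall)
```

Replacing the Boolean match with TF-IDF or BM25 scores turns this into the ranked retrieval discussed in the lecture, evaluated with P/R/F1, MAP, or NDCG instead of a single precision/recall pair.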

Week 13
- Lecture: Information Retrieval II (link analysis: PageRank/HITS; scale issues, meta-search, operational considerations)
- Exercise: compare ranking methods; evaluation design

Week 14
- Lecture: Japanese / Multibyte NLP & IR (Japanese resources, morphological analysis, agglutinative language properties, Japanese search issues, applications)
- Exercise: preprocessing/representation/search on Japanese data; wrap-up

Final exam: Conducted during the official examination period (not counted as one of the 14 class meetings). The exam includes knowledge questions and algorithmic/problem-solving questions.

[Assignments]
Assignments aim to consolidate understanding and practical application. Each assignment requires a PDF report and code (Colab/Notebook or equivalent).
Common rule: two-week submission window for each assignment.
A1 (L1–L2): Python/NLTK introduction + text acquisition and preprocessing (10 pts)
A2 (L3–L4): Tokenization/normalization + n-gram observation (10 pts)
A3 (L5–L6): Tagging/WSD (classifier + feature design) (10 pts)
A4 (L7–L8): Document representation + dimensionality reduction (TF-IDF/LSA, etc.) (10 pts)
A5 (L9): Classification vs clustering comparison (single-topic) (10 pts)
A6 (L10–L11): Information extraction + syntax/parsing (IE/Parsing) (10 pts)
A7 (L12–L13): Integrated simple IR system (indexing → search → evaluation) (10 pts)
教科書
/Textbook(s)
Steven Bird, Ewan Klein, Edward Loper,
Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit (O’Reilly)
(Japanese edition: 『入門 自然言語処理』, O’Reilly Japan)
成績評価の方法・基準
/Grading method/criteria
Assignments (A1–A7): 70 points (10 points × 7)
Final exam: 30 points
Total: 100 points

Late submission rule: if the first submission is not made within the submission window, 4 points are deducted (minimum score 0).
履修上の留意点
/Note for course registration
Attendance includes both lecture and exercise sessions.
If students notify the instructor in advance about an expected absence, late submission handling may be considered depending on circumstances (illness, job hunting, official events, etc.).
If the instructor is away for international conferences, the class may be delivered online (e.g., Zoom). In such cases, students are still required to attend the designated classroom and have attendance verified by a substitute instructor or TA.
参考(授業ホームページ、図書など)
/Reference (course
website, literature, etc.)
Office hours: arranged after class or individually via ELMS/email.
NLTK: https://www.nltk.org/
NLTK Book: https://www.nltk.org/book/
Basics of Information Retrieval (Kyoritsu Shuppan)


Responsibility for the wording of this article lies with Student Affairs Division (Academic Affairs Section).

E-mail Address: sad-aas@u-aizu.ac.jp