CSCI 4962/6962 Security and Privacy of Machine Learning (2023 Spring, RPI)
Tentative Syllabus
Office: MRC 330B
Office Hours: Fri 2:30 PM - 3:30 PM
Class time and location: Tue/Fri 12:00-1:50 PM in SAGE 4203
1. Course Description:
Machine learning (ML) has demonstrated superior performance in many areas and has increasingly been deployed in real-world critical applications. However, the vulnerabilities and privacy risks of ML models can endanger public safety and user privacy. Existing studies have shown that ML applications such as financial analytics and autonomous vehicles are vulnerable to attacks that manipulate the models for malicious ends.
This course will introduce potential vulnerabilities of ML models, recent research, and future directions on the security and privacy problems of real-world machine learning systems. The objectives of the course are the following:
- Provide an in-depth overview of different types of attacks against computer systems that leverage ML, as well as defense techniques.
- Discuss adversarial attacks in real-world applications, including cyber security, autonomous vehicles, etc.
- Understand the privacy risks of ML models, the concept of differential privacy, and its application in privacy-preserving ML.
- Understand the robustness and fairness of ML.
- Help students familiarize themselves with the emerging body of literature on each topic, understand different algorithms, analyze security vulnerabilities, and develop the ability to conduct research projects on related topics.
2. Prerequisites:
CSCI 4150 - Introduction to Artificial Intelligence or CSCI 4100 - Machine Learning from Data is a recommended prerequisite. Alternatively, you may have taken a security-related course such as ITWS 4370 - Information System Security and be willing to learn foundational machine learning material on your own. Note that machine learning courses are not hard prerequisites if you are already familiar with foundational machine learning concepts such as gradient descent, linear regression, and neural networks.
3. Course Format (Tentative):
Each student will present 3-4 papers and lead the discussion in class on a specific topic related to this course. Topics and related papers will be announced soon.
Each student will choose a paper from the reading list (not one from their own presentation set) for each class day and write a 1-page summary, which summarizes the paper's motivation, research problem, and key contributions and lists the strengths and weaknesses of the work. Student speakers do not need to write a summary for their presentation day. The summary should be submitted before class.
For the final project, students will work individually or in groups of two on a topic related to this course. Example topics include, but are not limited to:
1. Implement attacks against real-world ML systems or general/novel ML models.
2. Improve the attack/defense algorithms in published papers with your own methods.
3. Benchmark the robustness of existing ML models and conduct a comparative study.
4. Conduct a literature survey on a particular ML security and privacy topic not covered in the course.
4. Grading Policy:
Paper summaries: 15%
Paper presentation: 20%
Project: 60% (5% proposal + 40% final deliverable + 15% presentation)
Attendance: 5%
5. Academic Integrity:
The Rensselaer Handbook of Student Rights and Responsibilities defines various forms of Academic Dishonesty, and you should familiarize yourself with these.
Paper summaries should be done individually.
Do not copy code directly from the internet.