Alina Oprea
Professor

Research interests
- Security analytics
- Cloud security
- Network security
- Applied cryptography
Education
- PhD in Computer Science, Carnegie Mellon University
- MS in Computer Science, Carnegie Mellon University
- BS in Mathematics and Computer Science, University of Bucharest, Romania
Biography
Alina Oprea is a professor in the Khoury College of Computer Sciences at Northeastern University, based in Boston.
Oprea is interested in extracting meaningful intelligence from different data sources for security applications. By designing rigorous machine learning techniques to predict the behavior of sophisticated attackers, she hopes to protect cloud infrastructures against emerging threats. Oprea co-directs the Network and Distributed Systems Security Lab, which focuses on building distributed systems and network protocols that achieve security, availability, and performance.
Before joining Khoury College, Oprea was a research scientist at RSA Laboratories, where she studied cloud security, applied cryptography, foundations of cybersecurity, and security analytics.
The co-author of numerous journal and peer-reviewed conference papers, Oprea has served on many technical program committees, including IEEE S&P, NDSS, ACM CCS, ACSAC, and DSN, and is a co-inventor on 20 patents. She is an associate editor of the ACM Transactions on Privacy and Security journal. Oprea received the Best Paper Award at the 2005 Network and Distributed System Security Symposium, and in 2011 she received the Technology Review TR35 award for her research in cloud security.
Recent publications
- User Inference Attacks on Large Language Models
  Citation: Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu. (2024). User Inference Attacks on Large Language Models. EMNLP, 18238-18265. https://aclanthology.org/2024.emnlp-main.1014
- Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning
  Citation: Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan R. Ullman. (2024). Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning. ICLR. https://openreview.net/forum?id=4DoSULcfG6
- SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
  Citation: Ethan Rathbun, Christopher Amato, Alina Oprea. (2024). SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents. NeurIPS. http://papers.nips.cc/paper_files/paper/2024/hash/cb03b5108f1c3a38c990ef0b45bc8b31-Abstract-Conference.html
Citation: Ethan Rathbun, Christopher Amato, Alina Oprea. (2024). SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents NeurIPS. http://papers.nips.cc/paper_files/paper/2024/hash/cb03b5108f1c3a38c990ef0b45bc8b31-Abstract-Conference.html