Shaddin Dughmi
Associate Professor
Short Bio: Shaddin Dughmi is an Associate Professor in the Department of Computer Science at USC, where he is a member of the Theory Group. He received a B.S. in computer science, summa cum laude, from Cornell University in 2004, and a Ph.D. in computer science from Stanford University in 2011. He is a recipient of the NSF CAREER award, the Arthur L. Samuel best doctoral thesis award, and the ACM EC best student paper award.

Research interests: I am broadly interested in questions that stimulate the development of new algorithmic techniques and shed light on the power and limitations of algorithms. I have investigated such questions in a variety of domains: game theory, mechanism design, multi-agent systems, persuasion and information design, delegation and contract theory, decision making subject to online or stochastic uncertainty, and the theory of machine learning.

Email: "first name"@usc.edu
Research Papers
Local Regularizers are Not Transductive Learners.
Sky Jafar, Julian Asilis, and Shaddin Dughmi.
In submission.
PAC Learning is just Bipartite Matching (Sort of).
Shaddin Dughmi.
Position paper.
From Contention Resolution to Matroid Secretary and Back.
Shaddin Dughmi.
SIAM Journal on Computing (SICOMP) 2025 (to appear).
Proper Learnability and the Role of Unlabeled Data.
Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, and Shang-Hua Teng.
ALT 2025 (to appear).
Is Transductive Learning Equivalent to PAC Learning?
Shaddin Dughmi, Yusuf Hakan Kalayci, and Grayson York.
ALT 2025 (to appear).
Efficient Multi-agent Delegated Search.
Curtis Bechtel and Shaddin Dughmi.
AAMAS 2025 (to appear).
Transductive Learning is Compact.
Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, and Shang-Hua Teng.
NeurIPS 2024.
Open Problem: Can Local Regularization Learn All Multiclass Problems?
Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, and Shang-Hua Teng.
COLT (open problems track) 2024.
Regularization and Optimal Multiclass Learning.
Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, and Shang-Hua Teng.
COLT 2024.
Limitations of Stochastic Selection with Pairwise Independent Priors.
Shaddin Dughmi, Yusuf Hakan Kalayci, and Neel Patel.
STOC 2024.
On Supermodular Contracts and Dense Subgraphs.
Ramiro Deo-Campo Vuong, Shaddin Dughmi, Neel Patel, and Aditya Prasad.
SODA 2024.
On Sparsification of Stochastic Packing Problems.
Shaddin Dughmi, Yusuf Hakan Kalayci, and Neel Patel.
ICALP 2023.