Caglar Oksuz


View My GitHub Profile


Welcome to my personal academic webpage!

I am a Ph.D. candidate in Computer Science at Case Western Reserve University, focusing on machine learning security, privacy, and adversarial robustness. My research centers on black-box threats against machine learning models, such as model extraction and label-only membership inference attacks, particularly under realistic, resource-constrained scenarios. I also explore how explainable AI (XAI) techniques can inadvertently leak sensitive model information, and how such vulnerabilities can be exploited and mitigated.

I develop attack frameworks and privacy-preserving mechanisms across various domains including genomic data, web platforms, and ML-as-a-Service environments. I actively build tools and conduct evaluations that push the boundaries of adversarial machine learning research while maintaining a strong interest in policy, ethics, and law through my parallel legal education.

My work focuses on:

- Model extraction and label-only membership inference attacks under realistic, resource-constrained scenarios
- Privacy leakage through explainable AI (XAI) techniques, and how to mitigate it
- Privacy-preserving mechanisms for genomic data, web platforms, and ML-as-a-Service environments

🎓 Education

Ph.D. in Computer Science @ Case Western Reserve University (in progress)

LL.B. in Law @ Ankara University (in progress)

M.S. in Computer Science @ Bilkent University (August 2020)

B.S. in Computer Science @ Bilkent University (June 2017) (Minor in Psychology)

Erasmus+ Exchange Student @ Roskilde University (June 2015)

💼 Experience

Graduate Researcher @ Case Western Reserve University (Aug 2021 – present)

Graduate Researcher @ Bilkent University (Sep 2017 – Aug 2020)

Software Engineering Intern @ ASELSAN (Jul 2015 – Aug 2015)

Software Engineering Intern @ Nokta Medya (Aug 2014 – Sep 2014)

🔬 Publications

Find my latest research on: Google Scholar

AUTOLYCUS: Exploiting Explainable Artificial Intelligence (XAI) for Model Extraction Attacks against Interpretable Models

This study proposes a model extraction attack that leverages explainable AI (XAI) techniques to steal interpretable models. (GitHub Repo) (doi)
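The core idea of a model extraction attack can be illustrated in a few lines. This is a hypothetical sketch, not the paper's method: the victim model, the query budget, and the surrogate choice (a decision tree standing in for an interpretable model) are all illustrative assumptions.

```python
# Sketch of model extraction: query a black-box victim model, then fit
# an interpretable surrogate to mimic its decision boundary.
# `target_model` and the 500-query budget are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
target_model = LogisticRegression().fit(X[:1000], y[:1000])  # stand-in victim

# The attacker draws queries and observes only the victim's labels...
queries = X[1000:1500]                        # limited query budget
stolen_labels = target_model.predict(queries)

# ...then trains a local surrogate on the (query, label) pairs.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
surrogate.fit(queries, stolen_labels)

# Agreement on held-out points measures how faithful the copy is.
holdout = X[1500:]
agreement = (surrogate.predict(holdout) == target_model.predict(holdout)).mean()
print(f"surrogate agreement with victim: {agreement:.2f}")
```

The paper's contribution is doing this far more efficiently by exploiting XAI outputs (e.g., explanations) rather than labels alone; the sketch above shows only the label-based baseline.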

Privacy-preserving and robust watermarking on sequential genome data using belief propagation and local differential privacy

This study proposes a robust and privacy-preserving genomic watermarking scheme that embeds identifiable markers into sequential genome data using belief propagation and local differential privacy to resist collusion and inference attacks. (GitHub Repo) (doi)
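One building block of the scheme, local differential privacy, can be sketched with generalized randomized response on a SNP-like sequence (values 0/1/2). The epsilon value and alphabet below are assumptions for illustration; the paper's full construction additionally involves belief propagation and watermark embedding, which are omitted here.

```python
# Illustrative only: generalized randomized response (an LDP mechanism)
# applied to a synthetic genomic-style sequence over the alphabet {0, 1, 2}.
import math
import random

def randomize(value, epsilon, alphabet=(0, 1, 2)):
    """Keep `value` with probability e^eps / (e^eps + k - 1);
    otherwise report a uniformly random *other* symbol."""
    k = len(alphabet)
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_keep:
        return value
    return random.choice([a for a in alphabet if a != value])

random.seed(0)
sequence = [random.choice((0, 1, 2)) for _ in range(1000)]
noisy = [randomize(v, epsilon=2.0) for v in sequence]

# With epsilon = 2 and k = 3, about e^2 / (e^2 + 2) ~ 79% of positions
# survive unchanged -- the utility/privacy trade-off in one number.
kept = sum(a == b for a, b in zip(sequence, noisy)) / len(sequence)
print(f"fraction of positions unchanged: {kept:.2f}")
```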

🔍 Projects

Follower Analyzer

A graph-based data analyzer in MATLAB to extract, evaluate, and visualize follower interactions on Instagram. Used to track followers and their interactions, and to notify users when a follower unfollows them.
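The unfollow check at the heart of the analyzer is a set difference between snapshots. The project itself is in MATLAB; this Python sketch, with invented usernames, just shows the logic.

```python
# Compare two follower snapshots to find who left and who joined.
previous = {"alice", "bob", "carol"}   # snapshot from the last run
current = {"alice", "carol", "dave"}   # snapshot from this run

unfollowed = previous - current        # followers who quit following
new_followers = current - previous     # followers gained since last run

print(sorted(unfollowed), sorted(new_followers))
```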

Family Linker

A Python application that automates web scraping and ETL pipelines to extract structured social and familial relationship data on Northeast Ohio residents from voter registration and people-finder websites. Built with Jupyter and the BeautifulSoup, Selenium, pandas, and NumPy libraries (driving Chromium) to automate linkage and analysis. (GitHub Repo)
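A minimal sketch of the extract-and-link step, assuming a simplified voter-roll page layout; the table markup and column names below are invented for illustration, not the real sites' structure.

```python
# Extract tabular records from an HTML page, then link residents who
# share an address into candidate households.
from bs4 import BeautifulSoup
import pandas as pd

html = """
<table id="voters">
  <tr><th>Name</th><th>Address</th></tr>
  <tr><td>Jane Doe</td><td>12 Elm St</td></tr>
  <tr><td>John Doe</td><td>12 Elm St</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
rows = [[td.get_text() for td in tr.find_all("td")]
        for tr in soup.find_all("tr")[1:]]          # skip the header row
df = pd.DataFrame(rows, columns=["name", "address"])

# Naive linkage rule: residents at the same address form a household.
households = df.groupby("address")["name"].apply(list)
print(households.to_dict())
```

In practice Selenium would fetch the pages before this parsing step, and real linkage needs fuzzier matching than exact address equality.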

Spoiler Blocker

A Chrome extension designed to block spoiler content annotated by users on websites. Developed using React, Django, MySQL, and JavaScript, deployed via Google Cloud, with version control managed through Git. (GitHub Repo)

📬 Contact

If you’d like to collaborate or have any questions, feel free to reach out:

Thank you for visiting! • Powered by GitHub Pages