Ali Safaya

I am currently a third-year Computer Science PhD student at Koç University, advised by Deniz Yuret. I am also an AI Research Fellow at the KUIS AI Center in Istanbul. My primary research focus is memory augmentation for language models.

Email  /  CV  /  Semantic Scholar  /  Google Scholar  /  Twitter  /  Github

profile photo
Research

I am interested in natural language processing, language modeling (both large and small models), and deep learning in general. My PhD research is primarily focused on separating semantic and episodic information in language models, as well as developing improved memory architectures for modeling long sequences with Transformers.

In addition, I am an active member of the Turkish Data Depository (TDD) project, an open-source repository of data and tools for the Turkish language. I manage TDD's dataset hub and oversee Mukayese, a collection of Turkish language benchmarks, including the newly released MukayeseLLM leaderboard.

Mukayese: Turkish NLP Strikes Back
Ali Safaya, Emirhan Kurtulus, Arda Goktogan, Deniz Yuret
Findings of ACL 2022

Project page / Video / ArXiv

Mukayese is a collection of NLP benchmarks for the Turkish language, consisting of seven leaderboards for different NLP tasks. For each benchmark, we provide one or more datasets and at least two baselines.

Blog Posts

- Understanding the Impact of Token Redundancy in Language Models
- BERT-CNN for Detecting Offensive Speech
- Arabic-ALBERT: Pre-training Arabic Language models

This website is based on the source code of Jon Barron's website.