|
Yen-Hsiang Chang
Email  /  CV  /  GitHub
My research interests lie in the general area of high-performance computing, particularly parallel programming and algorithms, with a focus on mitigating load imbalance in parallel applications and designing memory-efficient parallel algorithms.
I am an EECS PhD student at UC Berkeley, co-advised by Jim Demmel and Aydın Buluç. I am associated with the BeBOP and PASSION groups.
I received my Bachelor's degree in Computer Engineering from UIUC in 2022, with Highest Honors and a minor in Mathematics. I had the privilege of working with Prof. Wen-mei Hwu, Prof. Rakesh Nagi, and Prof. Jinjun Xiong at Illinois. If you would like to learn more about me, please see my résumé or contact me via email.
|
Research
|
[2]  Parallelizing Maximal Clique Enumeration on GPUs
Mohammad Almasri*, Yen-Hsiang Chang*, Izzat El Hajj, Rakesh Nagi, Jinjun Xiong, Wen-mei Hwu (*Equal contribution)
32nd International Conference on Parallel Architectures and Compilation Techniques (PACT'23)
arXiv  /  code
Instead of the breadth-first approaches used in prior GPU works, we propose to parallelize maximal clique enumeration on GPUs by performing depth-first traversal using the Bron-Kerbosch algorithm. We also propose a worker list for dynamic load balancing, as well as partial induced subgraphs and a compact representation of excluded vertex sets to regulate memory consumption. Our evaluation shows that our implementation on a single GPU outperforms the state-of-the-art parallel CPU implementation by a geometric mean of 4.9X (up to 16.7X), and scales efficiently to multiple GPUs.
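To make the depth-first formulation concrete, below is a minimal sequential sketch of the classic Bron-Kerbosch recursion with pivoting. It only illustrates the traversal that the GPU implementation parallelizes; the worker list, partial induced subgraphs, and compact excluded-set representation from the paper are not shown, and the example graph is made up for illustration.

# Minimal sequential sketch of Bron-Kerbosch with pivoting (illustrative only;
# this is the classic algorithm, not the paper's GPU implementation).

def bron_kerbosch(R, P, X, adj, report):
    """Report every maximal clique containing all of R, some of P, none of X."""
    if not P and not X:
        report(R)                      # R is a maximal clique
        return
    # Pivot on the vertex with the most neighbors in P to prune branches.
    pivot = max(P | X, key=lambda v: len(P & adj[v]))
    for v in list(P - adj[pivot]):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, report)
        P.remove(v)                    # exclude v from later branches
        X.add(v)

# Example: a triangle 0-1-2 plus a pendant edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
bron_kerbosch(set(), set(adj), set(), adj, lambda clique: print(sorted(clique)))
# Prints the maximal cliques [0, 1, 2] and [2, 3].

Each recursive call corresponds to one node of the depth-first enumeration tree; the paper's contribution lies in how this tree is traversed, load-balanced, and kept memory-efficient on GPUs.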
|
[1]  MLHarness: A Scalable Benchmarking System for MLCommons
Yen-Hsiang Chang, Jianhao Pu, Jinjun Xiong, Wen-mei Hwu
2021 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench'21)
arXiv  /  code  /  presentation
We propose MLHarness, a scalable benchmarking harness system for MLCommons Inference built on top of the MLModelScope system. It has three distinctive features: (1) it codifies the standard benchmarking process defined by MLCommons Inference, including the models, datasets, DL frameworks, and software and hardware systems; (2) it provides an easy, declarative approach for model developers to contribute their models and datasets to MLCommons Inference; and (3) it supports a wide range of models with varying input/output modalities, so that these models can be benchmarked scalably across different datasets, frameworks, and hardware systems.
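As a rough illustration of what a declarative model contribution might look like, the sketch below shows the kind of information a contributor could specify and hand to a harness. The field names, values, and the run_benchmark function are hypothetical placeholders; they are not MLHarness's actual manifest schema or API.

# Purely hypothetical sketch of a declarative model description; field names
# and run_benchmark() are illustrative assumptions, not MLHarness's real API.

manifest = {
    "name": "resnet50-v1.5",            # model identifier (hypothetical)
    "framework": "onnxruntime",          # DL framework backend
    "dataset": "imagenet-val",           # dataset to benchmark against
    "modality": "image_classification",  # input/output modality
    "preprocess": "center_crop_224",     # named pre-processing step
    "postprocess": "argmax_top5",        # named post-processing step
}

def run_benchmark(manifest, scenario="SingleStream"):
    """Placeholder showing where a harness would codify the benchmark flow:
    load the model, run the dataset through the declared pre/post-processing,
    and report latency and accuracy for the given scenario."""
    print(f"benchmarking {manifest['name']} ({scenario}) on {manifest['dataset']}")

run_benchmark(manifest)

The point of the declarative style is that contributors describe what to benchmark, while the harness owns how the MLCommons Inference process is executed and measured.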
|
Honors & Awards
International
- 2022 Google Hash Code World Finals, 17th Place
- 44th Annual ICPC World Finals Championship, Bronze Medal (11th Place)
- 2021 Google Code Jam Round 3, 163rd Place
- Microsoft Q# Coding Contest – Summer 2020, 6th Place
- 2020 Topcoder Open Algorithm Competition, Round 4 Qualifier (Top 110)
- 2020 Google Code Jam Round 3, 132nd Place
- 2019 Google Code Jam Round 3, 112th Place
|
Domestic
- 2021 UIUC ECE Alumni Association Scholarship
- 2020 UIUC Robert M. Janowiak Scholarship
- 2020 UIUC & Michigan Correlation One’s Terminal Live, 4th Place
- 2020 ICPC North America Championship, Midwest Champion (10th Place)
- 2019 ICPC Mid-Central USA Programming Contest, 1st Place
- 2018–2022 Dean's List, Grainger College of Engineering, UIUC
|
|
This is an academic website for Yen-Hsiang Chang to share his experiences, projects, and publications.
The style of this website is borrowed from Dr. Jon Barron's.
|
|