Author: Guilbeault, Jessica

We Are Pleased to Welcome Rahul Jayachandran and Joel Duah as Our Summer 2024 REU Students

“Hi, I’m Rahul Jayachandran and I’m a rising college sophomore from Glastonbury, CT. I am pursuing a degree in computer science and math and am excited to spend this summer conducting research at UConn. In my free time I enjoy rowing and playing piano.”


“Hello, I’m Joel Duah, a junior at UConn from Manchester, CT. I’m pursuing a major in Computer Science and Engineering with a concentration in Computational Data Analytics. I’m enthusiastic about applying my knowledge from past semesters to this research opportunity, particularly in exploring voting patterns in Connecticut. Through this REU, I aim to increase my proficiency in handling data and making it more accessible to individuals without technical backgrounds. Outside of academics, I find joy in hiking and reading.”


Congratulations!

Congratulations Sheida Nabavi on your SPARK Grant


Funding Agency: UConn FY24 SPARK Technology Commercialization Fund 

Title: AI-CAD for Breast Cancer Screening 

Amount: $50,000 (with the possibility of an additional $50,000)

Dates: March 1, 2024 to April 30, 2025

Congratulations Caiwen Ding on your Amazon Grant


Funding Agency: Amazon

Title: Graph of Thought: Boosting Logical Reasoning in Large Language Models

Amount: $70,000 Cash + $50,000 AWS Credits

Dates: April 2024 to March 2025

CyberSEED 2024

We had another great CyberSEED event this past Saturday, March 23, 2024, with 96 teams and 226 students. The competition was intense, with the top team being the only one to solve all of the challenges. The briefing presentations proved to be a deciding factor in the placement of the top teams and are a valuable component of the experience for the students.

Congratulations to the top 10 teams, with UConn coming in 2nd place!
Award Ceremony Presentation

Congratulations Caiwen Ding and Dongjin Song on your NSF CAREER Awards!

Caiwen Ding

Congratulations Caiwen Ding on receiving a National Science Foundation CAREER Award for his proposal titled “CAREER: Algorithm-Hardware Co-design of Efficient Large Graph Machine Learning for Electronic Design Automation”. The goal of the project is to address the efficiency and scalability of using graph learning for Electronic Design Automation, through a series of algorithm-hardware co-design approaches.

Caiwen Ding is an assistant professor in the School of Computing at the University of Connecticut. He received his Ph.D. degree from Northeastern University (NEU), Boston in 2019, supervised by Prof. Yanzhi Wang. His interests include algorithm-system co-design of machine learning/artificial intelligence, privacy-preserving machine learning, machine learning for electronic design automation (EDA), and neuromorphic computing. He is a recipient of the 2024 CISCO Research Award and the NSF CAREER Award. He received best paper nominations at 2018 DATE and 2021 DATE, the best paper award at the DL-Hardware Co-Design for AI Acceleration (DCAA) workshop at 2023 AAAI, the outstanding student paper award at 2023 HPEC, a publicity paper at 2022 DAC, and the 2021 Excellence in Teaching Award from the UConn Provost. His team won first place in accuracy and fourth place overall at the 2022 TinyML Design Contest at ICCAD. He was ranked among Stanford’s World’s Top 2% Scientists in 2023. His research has been mainly funded by NSF, DOE, DOT, USDA, SRC, and multiple industrial sponsors.

Abstract: Estimating Power, Performance, and Area (PPA) earlier in the electronic design automation (EDA) flow would improve the Quality of Results (QoR) and reliability in chip design. The classical analytical or heuristic methods can be challenging to fine-tune, especially for complex problems. Machine learning (ML) methods have proven to be effective in addressing these problems. Graph Neural Networks (GNNs) have gained popularity since they are among the most natural ways to represent the fundamental objects in the EDA flow. However, with increased design complexity and chip capacity, an increasing performance gap exists between the extremely large graphs in EDA and the insufficient support from general-purpose hardware, such as mainstream graphics processing units (GPUs). This project aims to expedite large-graph machine learning on various EDA tasks through a full-fledged development of efficient and scalable computing paradigms. This project's novelties are EDA domain knowledge-aware graph machine learning, training acceleration, and algorithm-hardware co-design and optimization. The project's broader significance and importance include: (1) to advance the field of machine learning in chip design, highlighted in the National Artificial Intelligence Initiative; (2) to deepen the understanding of interactions among EDA domain knowledge, graph learning, and GPU acceleration; (3) to enrich the computer engineering curriculum and promote participation from undergraduates, underrepresented groups, and K-12 students in STEM fields through relevant programs.
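
The abstract describes applying graph neural networks to the very large graphs that arise in the EDA flow. As a rough, self-contained illustration of that general idea (not the project's actual models or data), here is a minimal sketch of a GNN regressor over a toy netlist-style graph using PyTorch Geometric; the node features, edges, and per-cell target are all invented:

```python
# A minimal, hypothetical sketch of a GNN regressor over a netlist-style
# graph. The node features, edges, and the per-cell target below are
# invented for illustration; this is not the project's actual model or data.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class NetlistGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.head = torch.nn.Linear(hidden_dim, 1)  # per-node PPA-style score

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.head(x).squeeze(-1)

# Toy graph: 4 "cells" with 3-dim features, wired in a cycle by "nets".
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
target = torch.randn(4)  # hypothetical per-cell congestion labels

model = NetlistGNN(in_dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = F.mse_loss(model(x, edge_index), target)
    loss.backward()
    opt.step()
```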

Dongjin Song

Congratulations Dongjin Song on receiving the prestigious National Science Foundation (NSF) CAREER Award to support his research project titled "CAREER: Towards Continual Learning on Evolving Graphs: from Memorization to Generalization". This project will develop a generic machine learning paradigm, Continual Learning on Evolving Graphs (CoLEG), to resolve the catastrophic forgetting problem by retaining essential structural information and temporal dynamics, ensure generalization capability, and address real-world applications on evolving graphs. Specifically, he not only plans to tackle the catastrophic forgetting issue in structurally evolving graphs via graph sparsification and topology-aware embedding, but also aims to develop new algorithms that incorporate structural and temporal dynamic patterns of evolving graphs under different regimes, resolve the task-free challenge, and reveal high-order dependencies. He will also develop novel solutions to pursue and improve pre-trained models and facilitate test-time adaptation to ensure generalization over unforeseen scenarios.

Dongjin Song has been an assistant professor in the School of Computing, University of Connecticut since Fall 2020. He was previously a research staff member at NEC Labs America in Princeton, NJ. He received his Ph.D. degree in the ECE Department from the University of California San Diego (UCSD) in 2016. His research interests include machine learning, data science, deep learning, and related applications for time series data analysis and graph representation learning. Papers describing his research have been published at top-tier data science and artificial intelligence conferences, such as NeurIPS, ICML, KDD, ICDM, SDM, AAAI, IJCAI, ICLR, CVPR, ICCV, etc. He is an Associate Editor for Neurocomputing and has served as Senior PC for AAAI, IJCAI, and CIKM. He received the prestigious NSF CAREER award in 2024 and the UConn Research Excellence Program (REP) Award in 2021. He has co-organized the AI for Time Series (AI4TS) Workshop at IJCAI, AAAI, ICDM, and SDM, and the MiLeTS workshop at KDD.

Abstract: In the modern big data era, data often grows continuously and its interconnections and temporal dynamics evolve. To cope with the continuous evolution in data, an intelligent agent needs to incrementally acquire, perceive, accumulate, and exploit structural and temporal dynamic knowledge throughout its lifetime. This project aims to develop a generic machine learning paradigm to conduct Continual Learning on Evolving Graphs (CoLEG). The success of this project will 1) benefit critical infrastructure (such as social networks, transportation, and renewable energy) and human welfare (in the form of, for example, improvements in healthcare and epidemiology), 2) provide an ideal platform for composing the areas of graph representation learning, time series analysis, continual learning, and causal analysis, and 3) develop open-source tools for evolving graphs that can advance diverse topics such as node classification, link prediction, and temporal forecasting, improve our knowledge of the physical world, and contribute to real-world applications. This project will also 1) engage high school students in research and outreach to K-12 teachers and students, 2) broaden the participation of underrepresented groups especially female and low-income students in STEM, and 3) educate undergraduate and graduate students through the development of new course modules in data mining and machine learning.
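
To make "catastrophic forgetting" concrete, here is a minimal, hypothetical sketch of replay-based continual learning over a stream of drifting tasks. It deliberately omits CoLEG's graph-specific machinery (graph sparsification, topology-aware embeddings, temporal dynamics), reducing each "snapshot" to a synthetic feature batch; it illustrates the problem setting, not the proposed method:

```python
# A minimal sketch of replay-based continual learning over a stream of
# tasks, illustrating the catastrophic forgetting problem the abstract
# targets. All data is synthetic; graph structure is not modeled here.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(
    torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
replay_x, replay_y = [], []  # small memory of past snapshots

for t in range(5):  # five "snapshots" arriving over time
    x_t = torch.randn(64, 8)
    y_t = (x_t[:, 0] + 0.1 * t > 0).long()  # drifting decision boundary
    for _ in range(50):
        x, y = x_t, y_t
        if replay_x:  # mix in replayed samples to resist forgetting
            x = torch.cat([x, torch.cat(replay_x)])
            y = torch.cat([y, torch.cat(replay_y)])
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    # store a small sample of this snapshot for future replay
    replay_x.append(x_t[:8])
    replay_y.append(y_t[:8])
```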

CACC Supported Tan Zhu’s NeurIPS Travel

Congratulations Tan Zhu on your paper "Polyhedron Attention Module: Learning Adaptive-order Interactions" being accepted for presentation at the Conference on Neural Information Processing Systems (NeurIPS).

Can you summarize your research area?

My research interests lie primarily in developing novel DNN architectures for recommendation systems, applied to mental health disorder diagnosis and click-through rate prediction, and in reinforcement learning algorithms focusing on the deep stochastic contextual bandit problem and Monte Carlo tree search.

What is the overarching goal of your graduate study?

My overarching goal is to improve DNNs' interpretability and performance by conducting feature selection with deep reinforcement learning and by incorporating novel feature interactions with trainable complexity into the training process of DNNs.

How do you hope that you will have changed computing in five years?

In the next five years, in addition to developing interpretation methods for DNNs, I’m going to explore feature selection and dataset distillation algorithms that utilize the model interpretations of DNNs. With the interpretable knowledge extracted from state-of-the-art DNN models, it's possible to efficiently and elegantly downscale large datasets and reduce the time and space complexity of training large DNNs. Over the past few years, Large Language Models (LLMs) have undergone significant development, marking a transformative period in the field of artificial intelligence and natural language processing, but their scale makes training and deployment costly. Given these challenges, I am confident that my research can offer valuable contributions to both academic and industrial work in this area.

How does additional support allow you to more effectively complete your graduate study?

I'm really thankful for the support I've received during my graduate studies. Prof. Bi's guidance has been incredibly valuable, helping me grow academically and professionally. The support provided by the CACC and the Computer Science department gives me a collaborative environment, which has greatly enriched my learning experience. The availability of high-performance computing resources allows me to engage in advanced deep learning research, which demands substantial computational power.

What are you hoping to do upon graduation?

Upon graduation, I’m planning to transition into the industry.

(For papers) What is the major improvement made in this work?  What consequences does this improvement have for the field in general?

Our Polyhedron Attention Module (PAM) can adaptively learn interactions of different complexity for different samples, and in our theoretical analysis we showed that PAM has stronger expressive capability than ReLU-activated networks. Extensive experimental results demonstrate PAM's state-of-the-art classification performance on massive click-through rate prediction datasets, and PAM can learn meaningful interaction effects in a medical problem. These improvements not only set new benchmarks in click-through rate prediction but also underscore the growing importance of model transparency in AI.
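
As a point of contrast for readers unfamiliar with interaction models in click-through rate prediction: a classic factorization machine scores every pairwise (second-order) feature interaction, with the same fixed order for every sample. The sketch below shows that fixed-order baseline; it is not the Polyhedron Attention Module itself, whose adaptive-order construction is detailed in the paper:

```python
# A standard factorization-machine interaction layer: fixed second-order
# interactions for all samples. Shown only as a contrast to PAM's
# adaptive-order interactions; this is NOT the PAM architecture.
import torch

class FMInteraction(torch.nn.Module):
    def __init__(self, num_features, k=8):
        super().__init__()
        self.v = torch.nn.Parameter(torch.randn(num_features, k) * 0.01)

    def forward(self, x):  # x: (batch, num_features)
        # Sum over pairs <v_i, v_j> * x_i * x_j via the FM identity:
        # 0.5 * ((x V)^2 - (x^2)(V^2)).sum(-1)
        xv = x @ self.v
        x2v2 = (x * x) @ (self.v * self.v)
        return 0.5 * (xv.pow(2) - x2v2).sum(dim=-1)

scores = FMInteraction(num_features=16)(torch.randn(4, 16))
print(scores.shape)  # torch.Size([4])
```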


Tan Zhu

"The NeurIPS conference offered a comprehensive overview of current research trends, including developments in large language models, knowledge distillation, and reinforcement learning. The most notable aspect was the researchers' emphasis on applying large language models to various research fields, demonstrating the significant potential of these models in addressing diverse challenges".

Congratulations Bin Lei, Caiwen Ding, Le Chen, Pei-Hung Lin, and Chunhua Liao on having your paper accepted by the HPEC 2023 conference and receiving the Outstanding Student Paper Award.

Creating a Dataset for High-Performance Computing Code Translation using LLMs: A Bridge Between OpenMP Fortran and C++

In this study, we present a novel dataset for training machine learning models to translate between OpenMP Fortran and C++ code. To ensure reliability and applicability, the dataset is created from a range of representative open-source OpenMP benchmarks. It is also refined using a meticulous code similarity test. The effectiveness of our dataset is assessed using both quantitative (CodeBLEU) and qualitative (human evaluation) methods. We showcase how this dataset significantly elevates the translation competencies of large language models (LLMs). Specifically, models without prior coding knowledge experienced a 5.1× boost in their CodeBLEU scores, while models with some coding familiarity saw an impressive 9.9× increase.
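
To illustrate the translation task, here is a hypothetical example of what a single Fortran-to-C++ pair in such a dataset might look like; it is invented for this post, not drawn from the paper's actual dataset:

```python
# A hypothetical illustration of one OpenMP Fortran <-> C++ training pair.
# Both snippets parallelize the same element-wise vector addition; the
# dataset format shown here is an assumption for illustration only.
pair = {
    "fortran": """\
!$OMP PARALLEL DO
do i = 1, n
   c(i) = a(i) + b(i)
end do
!$OMP END PARALLEL DO
""",
    "cpp": """\
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    c[i] = a[i] + b[i];
}
""",
}
print(pair["fortran"])
print(pair["cpp"])
```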

Read more
