Center News

Congratulations!

Congratulations Sheida Nabavi on your SPARK Grant

 

Funding Agency: UConn FY24 SPARK Technology Commercialization Fund 

Title: AI-CAD for Breast Cancer Screening 

Amount: $50,000 (with the possibility of an additional $50,000)

Dates: March 1, 2024 to April 30, 2025

Congratulations Caiwen Ding on your Amazon Grant

 

Funding Agency: Amazon

Title: Graph of Thought: Boosting Logical Reasoning in Large Language Models

Amount: $70,000 Cash + $50,000 AWS Credits

Dates: April 2024 to March 2025

CyberSEED 2024

We had another great CyberSEED event this past Saturday, March 23, 2024, with 96 teams and 226 students. The competition was intense, with the top team being the only one to solve all of the challenges. The briefing presentations proved to be a deciding factor in the placement of the top teams and are a valuable component of the experience for the students.

Congratulations to the top 10 teams, with UConn coming in 2nd place!
Award Ceremony Presentation

Congratulations Caiwen Ding and Dongjin Song on your NSF CAREER Awards!

Caiwen Ding


Congratulations Caiwen Ding on receiving a National Science Foundation CAREER Award for his proposal titled “CAREER: Algorithm-Hardware Co-design of Efficient Large Graph Machine Learning for Electronic Design Automation”. The goal of the project is to address the efficiency and scalability of using graph learning for electronic design automation through a series of algorithm-hardware co-design approaches.

Caiwen Ding is an assistant professor in the School of Computing at the University of Connecticut. He received his Ph.D. degree from Northeastern University (NEU), Boston, in 2019, supervised by Prof. Yanzhi Wang. His interests include algorithm-system co-design of machine learning/artificial intelligence, privacy-preserving machine learning, machine learning for electronic design automation (EDA), and neuromorphic computing. He is a recipient of the 2024 Cisco Research Award and the NSF CAREER Award. He received best paper nominations at DATE 2018 and DATE 2021, the best paper award at the DL-Hardware Co-Design for AI Acceleration (DCAA) workshop at AAAI 2023, the outstanding student paper award at HPEC 2023, a publicity paper at DAC 2022, and the 2021 Excellence in Teaching Award from the UConn Provost. His team won first place in accuracy and fourth place overall at the 2022 TinyML Design Contest at ICCAD. He was ranked among Stanford’s World’s Top 2% Scientists in 2023. His research has been mainly funded by NSF, DOE, DOT, USDA, SRC, and multiple industrial sponsors.

Abstract: Estimating Power, Performance, and Area (PPA) earlier in the electronic design automation (EDA) flow would improve the Quality of Results (QoR) and reliability in chip design. Classical analytical or heuristic methods can be challenging to fine-tune, especially for complex problems. Machine learning (ML) methods have proven effective in addressing these problems. Graph Neural Networks (GNNs) have gained popularity since they are among the most natural ways to represent the fundamental objects in the EDA flow. However, with increased design complexity and chip capacity, a growing performance gap exists between the extremely large graphs in EDA and the insufficient support from general-purpose hardware, such as mainstream graphics processing units (GPUs). This project aims to expedite large-graph machine learning on various EDA tasks through the full-fledged development of efficient and scalable computing paradigms. The project's novelties are EDA domain knowledge-aware graph machine learning, training acceleration, and algorithm-hardware co-design and optimization. The project's broader significance and importance include: (1) advancing the field of machine learning in chip design, as highlighted in the National Artificial Intelligence Initiative; (2) deepening the understanding of interactions among EDA domain knowledge, graph learning, and GPU acceleration; and (3) enriching the computer engineering curriculum and promoting participation from undergraduates, underrepresented groups, and K-12 students in STEM fields through relevant programs.
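As background on why GNNs map so naturally onto EDA objects: a netlist can be viewed as a graph whose nodes are cells and whose edges are nets, and a GNN aggregates neighbor features to predict quantities such as delay or congestion. The minimal sketch below shows one generic message-passing step over a toy netlist; it is purely illustrative, not the project's co-designed algorithm or hardware, and all names and shapes are assumptions.

```python
# Purely illustrative sketch: a toy netlist as a graph, with one mean-aggregation
# message-passing step feeding a head that could output a PPA-related estimate.
# Generic example only; not the project's co-designed method.
import torch
import torch.nn as nn

# Toy netlist: 5 cells, edges = nets connecting cells (undirected).
edges = torch.tensor([[0, 1], [1, 2], [1, 3], [3, 4]])
num_cells, feat_dim = 5, 4
x = torch.randn(num_cells, feat_dim)       # per-cell features (e.g., type, fan-out)

# Symmetric adjacency with self-loops, row-normalized for mean aggregation.
adj = torch.zeros(num_cells, num_cells)
adj[edges[:, 0], edges[:, 1]] = 1.0
adj = adj + adj.T + torch.eye(num_cells)
adj = adj / adj.sum(dim=1, keepdim=True)

# One message-passing layer plus a scalar head (e.g., a per-cell delay estimate).
msg_pass = nn.Linear(feat_dim, 16)
head = nn.Linear(16, 1)
h = torch.relu(msg_pass(adj @ x))          # aggregate neighbors, then transform
delay_estimate = head(h)
print(delay_estimate.shape)                # torch.Size([5, 1])
```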

Dongjin Song


Project Framework


Congratulations Dongjin Song on receiving the prestigious National Science Foundation (NSF) CAREER Award to support his research project titled "CAREER: Towards Continual Learning on Evolving Graphs: from Memorization to Generalization". This project will develop a generic machine learning paradigm, Continual Learning on Evolving Graphs (CoLEG), to resolve the catastrophic forgetting problem by retaining essential structural information and temporal dynamics, ensure the generalization capability, and address real-world applications on evolving graphs. Specifically, he not only plans to tackle the catastrophic forgetting issue in structurally evolving graphs via graph sparsification and topology-aware embedding, but also aims to develop new algorithms to incorporate structural and temporal dynamic patterns of evolving graphs under different regimes, resolve the task-free challenge, and reveal high-order dependencies. He will also develop novel solutions to leverage and improve pre-trained models and facilitate test-time adaptation to ensure generalization over unforeseen scenarios.
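As general background on the catastrophic forgetting problem the project targets, the sketch below shows a standard experience-replay baseline applied to a stream of graph snapshots. It is not the CoLEG framework or its graph sparsification and topology-aware embedding techniques; the model, buffer, and data are assumptions for illustration only.

```python
# Generic sketch of replay-based continual learning over a stream of graph
# snapshots. Standard experience-replay baseline, NOT the CoLEG framework;
# all names and details are assumptions for illustration.
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
memory = []            # buffer of (node features, label) from earlier snapshots

def train_on_snapshot(x, y, replay_size=32):
    """One continual-learning step: train on the new snapshot plus a replay
    sample of old nodes to limit catastrophic forgetting."""
    replay = random.sample(memory, min(replay_size, len(memory)))
    if replay:
        xr = torch.stack([r[0] for r in replay])
        yr = torch.stack([r[1] for r in replay])
        x, y = torch.cat([x, xr]), torch.cat([y, yr])
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

# Simulated stream of three evolving-graph snapshots (node features + labels).
for t in range(3):
    x_t = torch.randn(64, 8)
    y_t = torch.randint(0, 2, (64,))
    train_on_snapshot(x_t, y_t)
    memory.extend(zip(x_t, y_t))        # retain nodes for future replay
```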

Dongjin Song has been an assistant professor in the School of Computing, University of Connecticut since Fall 2020. He was previously a research staff member at NEC Labs America in Princeton, NJ. He received his Ph.D. degree from the ECE Department at the University of California San Diego (UCSD) in 2016. His research interests include machine learning, data science, deep learning, and related applications for time series data analysis and graph representation learning. Papers describing his research have been published at top-tier data science and artificial intelligence conferences, such as NeurIPS, ICML, KDD, ICDM, SDM, AAAI, IJCAI, ICLR, CVPR, and ICCV. He is an Associate Editor for Neurocomputing and has served as Senior PC for AAAI, IJCAI, and CIKM. He received the prestigious NSF CAREER Award in 2024 and the UConn Research Excellence Program (REP) Award in 2021. He has co-organized the AI for Time Series (AI4TS) workshops at IJCAI, AAAI, ICDM, and SDM, and the MiLeTS workshop at KDD.

Abstract: In the modern big data era, data often grows continuously and its interconnections and temporal dynamics evolve. To cope with the continuous evolution in data, an intelligent agent needs to incrementally acquire, perceive, accumulate, and exploit structural and temporal dynamic knowledge throughout its lifetime. This project aims to develop a generic machine learning paradigm to conduct Continual Learning on Evolving Graphs (CoLEG). The success of this project will 1) benefit critical infrastructure (such as social networks, transportation, and renewable energy) and human welfare (in the form of, for example, improvements in healthcare and epidemiology), 2) provide an ideal platform for composing the areas of graph representation learning, time series analysis, continual learning, and causal analysis, and 3) develop open-source tools for evolving graphs that can advance diverse topics such as node classification, link prediction, and temporal forecasting, improve our knowledge of the physical world, and contribute to real-world applications. This project will also 1) engage high school students in research and outreach to K-12 teachers and students, 2) broaden the participation of underrepresented groups especially female and low-income students in STEM, and 3) educate undergraduate and graduate students through the development of new course modules in data mining and machine learning.

CACC Supported Tan Zhu’s NeurIPS Travel

Congratulations Tan Zhu on your paper "Polyhedron Attention Module: Learning Adaptive-order Interactions" being accepted for presentation at the Conference on Neural Information Processing Systems (NeurIPS).

Can you summarize your research area?

My research interests lie primarily in developing novel DNN architectures for recommendation systems, applied to mental health disorder diagnosis and click-through rate prediction, and reinforcement learning algorithms focusing on the deep stochastic contextual bandit problem and Monte Carlo tree search.

What is the overarching goal of your graduate study?

My overarching goal is to improve DNNs’ interpretability and performance by conducting feature selection with deep reinforcement learning and by incorporating novel feature interactions with trainable complexity into the training process of DNNs.

How do you hope that you will have changed computing in five years?

In the next five years, in addition to developing interpretation methods for DNNs, I am going to explore feature selection and dataset distillation algorithms that utilize the model interpretations of DNNs. With the interpretable knowledge extracted from state-of-the-art DNN models, it is possible to efficiently and elegantly downscale large datasets and reduce the time and space complexity of training large DNNs. Over the past few years, Large Language Models (LLMs) have undergone significant development, marking a transformative period in artificial intelligence and natural language processing, and their scale makes efficient training and data selection all the more pressing. Given these challenges, I am confident that my research can offer valuable contributions to both academic and industrial work in this area.

How does additional support allow you to more effectively complete your graduate study?

I'm really thankful for the support I've received during my graduate studies. Prof. Bi's guidance has been incredibly valuable, helping me grow academically and professionally. The support provided by CACC and the Computer Science Department gives me a collaborative environment, which has greatly enriched my learning experience. The availability of high-performance computing resources allows me to engage in advanced deep learning research, which demands substantial computational power.

What are you hoping to do upon graduation?

Upon graduation, I’m planning to transition into the industry.

(For papers) What is the major improvement made in this work?  What consequences does this improvement have for the field in general?

Our Polyhedron Attention Module (PAM) can adaptively learn interactions of different complexity for different samples, and our theoretical analysis shows that PAM has stronger expressive capability than ReLU-activated networks. Extensive experimental results demonstrate PAM's state-of-the-art classification performance on massive click-through rate prediction datasets and show that PAM can learn meaningful interaction effects in a medical problem. These improvements not only set new benchmarks in click-through rate prediction but also underscore the growing importance of model transparency in AI.

 

Tan Zhu


"The NeurIPS conference offered a comprehensive overview of current research trends, including developments in large language models, knowledge distillation, and reinforcement learning. The most notable aspect was the researchers' emphasis on applying large language models to various research fields, demonstrating the significant potential of these models in addressing diverse challenges".

Congratulations Bin Lei, Caiwen Ding, Le Chen, Pei-Hung Lin, and Chunhua Liao on having your paper accepted by the HPEC 2023 conference and receiving the Outstanding Student Paper Award.

Creating a Dataset for High-Performance Computing Code Translation using LLMs: A Bridge Between OpenMP Fortran and C++

In this study, we present a novel dataset for training machine learning models to translate between OpenMP Fortran and C++ code. To ensure reliability and applicability, the dataset is created from a range of representative open-source OpenMP benchmarks. It is also refined using a meticulous code similarity test. The effectiveness of our dataset is assessed using both quantitative (CodeBLEU) and qualitative (human evaluation) methods. We showcase how this dataset significantly elevates the translation competencies of large language models (LLMs). Specifically, models without prior coding knowledge saw a 5.1× boost in their CodeBLEU scores, while models with some coding familiarity saw an impressive 9.9× increase.
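As an illustration of the kind of code-similarity filtering mentioned above (the paper's actual similarity test is not described here), below is a minimal, hypothetical near-duplicate filter over (Fortran, C++) pairs using Python's standard difflib:

```python
# Hypothetical sketch: de-duplicating translation pairs with a simple
# token-level similarity test. The paper's actual similarity test is not
# described here; this only illustrates the general idea.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of matching token subsequences between two code snippets."""
    return SequenceMatcher(None, a.split(), b.split()).ratio()

def deduplicate(pairs, threshold=0.9):
    """Keep a (fortran, cpp) pair only if its Fortran side is not
    nearly identical to one already kept."""
    kept = []
    for fortran_src, cpp_src in pairs:
        if all(similarity(fortran_src, f) < threshold for f, _ in kept):
            kept.append((fortran_src, cpp_src))
    return kept

pairs = [
    ("do i = 1, n\n  a(i) = b(i) + c(i)\nend do",
     "for (int i = 0; i < n; ++i) a[i] = b[i] + c[i];"),
    ("do i = 1, n\n  a(i) = b(i) + c(i)\nend do",   # near-duplicate Fortran side
     "for (int i = 0; i < n; i++) a[i] = b[i] + c[i];"),
]
print(len(deduplicate(pairs)))  # 1
```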

Read more


CACC is delighted to support Shaoyi Huang’s NeurIPS travel; our aim is to provide similar travel support for students of other CACC faculty.

Congratulations Shaoyi Huang on your paper on privacy-preserving machine learning acceleration being accepted for presentation at NeurIPS 2023.

Can you summarize your research area?

My research focuses on efficient machine learning on general AI systems, including efficient inference and training algorithms, algorithm and hardware co-design for AI acceleration, energy-efficient deep learning and artificial intelligence systems, and privacy-preserving machine learning.

What is the overarching goal of your graduate study?

My graduate studies are dedicated to spearheading the development of efficient machine learning, focusing particularly on addressing the computational and energy challenges of Deep Neural Networks (DNNs). My objective is to develop cutting-edge solutions in model compression and efficient training algorithms, alongside optimizing system design. The aim is not only to improve the performance of DNNs but also to reduce the environmental footprint of their training and inference processes.

My approach is characterized by an intensive investigation into model compression and sparse training techniques, optimization algorithms, and the synergy between algorithms and diverse hardware platforms, which includes GPUs, FPGAs, and emerging technologies like ReRAM. The goal is to catalyze the emergence of neural networks that are not just sustainable and scalable, but also democratically accessible and ethically responsible.

How do you hope that you will have changed computing in five years?

In the next five years, I am committed to continuing my work in the field of efficient machine learning and AI systems. My objective is to spearhead a series of breakthroughs, particularly in enhancing energy efficiency—a cornerstone for sustainable technological advancement. Through an integrative approach of algorithm and hardware co-design, I foresee my efforts contributing to more synergistic and robust AI systems.

The democratization of AI is another pivotal aspect of my vision. I intend to break down barriers, making sophisticated AI tools accessible to a wider audience and facilitating their integration into a myriad of devices. By doing so, AI will not only serve the few but empower the many, transcending traditional technological limitations.

Moreover, my enthusiasm for refining the intricacies of large language models and generative AI is unwavering. These areas are ripe with potential to revolutionize how we interact with and benefit from artificial intelligence. By fostering innovative algorithm development alongside hardware co-design, I am confident that we can make AI use more sustainable, ethically grounded, and impactful.

My aspiration is not merely to advance the field in academic or technical terms but to ensure these improvements lead to tangible benefits for society. By driving these changes, I hope to play a part in shaping a future where AI is not only more efficient but also more aligned with the ethical and practical needs of our global community.

How does additional support allow you to more effectively complete your graduate study?

During my Ph.D. study, besides the mentorship from my advisors Prof. Caiwen Ding and Prof. Omer Khan, I received multiple forms of additional support, such as fellowships from the Computer Science Department, CACC, Cigna, and Eversource, a student travel grant from the Workshop for Women in Hardware and Security, and advanced computational resources from the lab. These have been instrumental in enhancing the effectiveness and scope of my graduate research, pushing me to a higher level. Financial assistance alleviates the burden of tuition and living expenses, enabling me to dedicate more time to my studies and research, delving deeply into complex problems and innovating in the field of efficient machine learning. Access to state-of-the-art GPUs allows me to experiment with large-scale models and datasets, conduct extensive experiments and simulations, and verify the effectiveness of designs more rapidly. This is particularly crucial in a field as resource-intensive as deep learning, especially in today's large language model exploration.

What are you hoping to do upon graduation?

I hope to be an assistant professor after graduation, and I am on the job market this year.

(For papers) What is the major improvement made in this work?  What consequences does this improvement have for the field in general?

The major improvement made in this work is the development of a Structural Linearized Graph Convolutional Network (LinGCN) that optimizes the performance of Homomorphically Encrypted (HE) based GCN inference, reducing multiplication depth and addressing HE computation overhead.

This improvement has significant consequences for the field in general, as it enables the deployment of GCNs in the cloud while preserving data privacy. Additionally, the proposed framework can be applied to other machine learning models besides GCNs, making it a valuable contribution to the field of privacy-preserving machine learning.
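To give a flavor of the general idea rather than the LinGCN implementation itself, the sketch below replaces ReLU in a GCN layer with a learnable degree-2 polynomial, the kind of low-multiplicative-depth operation that homomorphic encryption handles efficiently. The layer, parameter names, and normalization are assumptions for illustration only.

```python
# Illustrative sketch only: an HE-friendly GCN layer that swaps ReLU for a
# learnable quadratic polynomial activation (low multiplicative depth).
# This is NOT the LinGCN code; names and details are assumptions.
import torch
import torch.nn as nn

class PolyActGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Coefficients of a degree-2 polynomial a*x^2 + b*x + c,
        # learned per layer so the activation stays HE-friendly.
        self.poly = nn.Parameter(torch.tensor([0.25, 0.5, 0.0]))

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Aggregate neighbor features with a normalized adjacency matrix,
        # then apply the polynomial activation element-wise.
        h = self.linear(adj_norm @ x)
        a, b, c = self.poly
        return a * h * h + b * h + c

# Toy usage: 4 nodes, 3 input features, symmetric-normalized adjacency.
adj = torch.tensor([[0., 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]])
adj_hat = adj + torch.eye(4)                      # add self-loops
deg_inv_sqrt = adj_hat.sum(1).pow(-0.5)
adj_norm = deg_inv_sqrt[:, None] * adj_hat * deg_inv_sqrt[None, :]

layer = PolyActGCNLayer(3, 8)
out = layer(torch.randn(4, 3), adj_norm)
print(out.shape)  # torch.Size([4, 8])
```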

Shaoyi Huang

Shaoyi Huang

"Attending this year's conference would significantly enrich my experience, providing more opportunities to engage with experts in the field, strengthen my professional network, and enhance my prospects for a future faculty position. Therefore, receiving support from CACC for this trip would be invaluable to my career, helping me realize my dream''.

Amid increasing demand, CT colleges in arms race to add cybersecurity programs, faculty


With thousands of cybersecurity job openings around the state — and entry-level positions that can command a six-figure starting salary — training the next generation of security engineers is a key challenge for Connecticut.

Colleges around the state say the fast-changing curriculum, difficulty of retaining expert faculty, importance of linking closely to industry, and looming challenge of AI make cybersecurity one of the most dynamic fields in education right now.

Another challenge is the ever-widening circle of people who need to be trained in combating cyberattacks.

Benjamin Fuller

“It’s not going to be good enough for there to be 10% or 15% of computer scientists who fix everybody else’s problems,” said Benjamin Fuller, an associate professor in the computer science department at the University of Connecticut.

Read more at HartfordBusiness.com

 

Four From UConn Named Fellows By AAAS

The official University of Connecticut seal, in painted gold on an oak panel.

The AAAS is the world’s largest general scientific society.

Four University of Connecticut faculty members have been elected by the American Association for the Advancement of Science (AAAS) to its newest class of fellows. The AAAS is the world’s largest general scientific society and publisher of the Science family of journals.

The four are:

* Bahram Javidi, a professor in the Department of Electrical and Computer Engineering in the School of Engineering

* James Magnuson, a professor in the Department of Psychological Sciences in the College of Liberal Arts and Sciences

* Wolfgang Peti, a professor in the Department of Molecular Biology and Biophysics at UConn School of Medicine

* Anthony Vella, a professor and chair of the Department of Immunology at UConn School of Medicine and the Senior Associate Dean for Research Planning and Coordination

Read more