Center News

CACC supported Tan Zhu’s NeurIPS travel.

Congratulations to Tan Zhu on your paper "Polyhedron Attention Module: Learning Adaptive-order Interactions" being accepted for presentation at the Conference on Neural Information Processing Systems (NeurIPS).

Can you summarize your research area?

My research interests lie primarily in developing novel DNN architectures for recommendation systems for mental health disorder diagnosis and click-through rate prediction, and reinforcement learning algorithms focusing on deep stochastic contextual bandit problems and Monte Carlo tree search.

What is the overarching goal of your graduate study?

My overarching goal is to improve the interpretability and performance of DNNs by conducting feature selection with deep reinforcement learning and by incorporating novel feature interactions with trainable complexity into the training process of DNNs.

How do you hope that you will have changed computing in five years?

In the next five years, in addition to developing interpretation methods for DNNs, I plan to explore feature selection and dataset distillation algorithms that utilize the model interpretations of DNNs. With the interpretable knowledge extracted from state-of-the-art DNN models, it is possible to efficiently and elegantly downscale large datasets and reduce the time and space complexity of training large DNNs. Over the past few years, Large Language Models (LLMs) have undergone significant development, marking a transformative period in artificial intelligence and natural language processing, and training such models efficiently raises exactly these scaling challenges. Given these challenges, I am confident that my research can offer valuable contributions to both academic and industrial work in this area.

How does additional support allow you to more effectively complete your graduate study?

I'm really thankful for the support I've received during my graduate studies. Prof. Bi's guidance has been incredibly valuable, helping me grow academically and professionally. The support provided by the CACC and the Computer Science department gives me a collaborative environment, which has greatly enriched my learning experience. The availability of high-performance computing resources allows me to engage in advanced deep learning research, which demands substantial computational power.

What are you hoping to do upon graduation?

Upon graduation, I’m planning to transition into industry.

(For papers) What is the major improvement made in this work?  What consequences does this improvement have for the field in general?

Our Polyhedron Attention Module (PAM) can adaptively learn interactions of different complexity for different samples, and our theoretical analysis shows that PAM has stronger expressive capability than ReLU-activated networks. Extensive experimental results demonstrate PAM's state-of-the-art classification performance on massive click-through rate prediction datasets and show that PAM can learn meaningful interaction effects in a medical problem. These improvements not only set new benchmarks in click-through rate prediction but also underscore the growing importance of model transparency in AI.

 

Tan Zhu

"The NeurIPS conference offered a comprehensive overview of current research trends, including developments in large language models, knowledge distillation, and reinforcement learning. The most notable aspect was the researchers' emphasis on applying large language models to various research fields, demonstrating the significant potential of these models in addressing diverse challenges".

Congratulations to Bin Lei, Caiwen Ding, Le Chen, Pei-Hung Lin, and Chunhua Liao on having your paper accepted at the HPEC 2023 conference and receiving the Outstanding Student Paper Award.

Creating a Dataset for High-Performance Computing Code Translation using LLMs: A Bridge Between OpenMP Fortran and C++

In this study, we present a novel dataset for training machine learning models to translate between OpenMP Fortran and C++ code. To ensure reliability and applicability, the dataset is created from a range of representative open-source OpenMP benchmarks. It is also refined using a meticulous code similarity test. The effectiveness of our dataset is assessed using both quantitative (CodeBLEU) and qualitative (human evaluation) methods. We showcase how this dataset significantly elevates the translation competencies of large language models (LLMs). Specifically, models without prior coding knowledge experienced a 5.1× boost in their CodeBLEU scores, while models with some coding familiarity saw an impressive 9.9× increase.
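As a rough illustration of how such translation-quality scores are computed, the sketch below uses plain BLEU from the sacrebleu package as a simplified stand-in for CodeBLEU (which additionally weights syntactic and data-flow matches). The Fortran-to-C++ snippets are hypothetical examples, not drawn from the paper's dataset.

```python
# Illustrative only: plain BLEU as a simplified stand-in for CodeBLEU,
# which also scores AST structure and data flow, not just n-grams.
# Assumes the `sacrebleu` package is installed (pip install sacrebleu).
import sacrebleu

# Hypothetical example pair: a reference C++ translation and a model output.
reference_cpp = [
    "for (int i = 0; i < n; ++i) { y[i] = a * x[i] + y[i]; }"
]
candidate_cpp = [
    "for (int i = 0; i < n; i++) { y[i] = a * x[i] + y[i]; }"
]

# corpus_bleu expects a list of candidates and a list of reference lists.
score = sacrebleu.corpus_bleu(candidate_cpp, [reference_cpp])
print(f"BLEU: {score.score:.1f}")
```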

Read more


CACC is delighted to support Shaoyi Huang’s NeurIPS travel; our aim is to provide similar travel support for students of other CACC faculty.

Congratulations to Shaoyi Huang on your paper on privacy-preserving machine learning acceleration being accepted for presentation at NeurIPS 2023.

Can you summarize your research area?

My research focuses on efficient machine learning on general AI systems, including efficient inference and training algorithms, algorithm and hardware co-design for AI acceleration, energy-efficient deep learning and artificial intelligence systems, and privacy-preserving machine learning.

What is the overarching goal of your graduate study?

My graduate studies are dedicated to spearheading the development of efficient machine learning, focusing particularly on addressing the computational and energy challenges of Deep Neural Networks (DNNs). My objective is to develop cutting-edge solutions in model compression and efficient training algorithms, alongside optimizing system design. The aim is not only to improve the performance of DNNs but also to reduce the environmental footprint of their training and inference process.

My approach is characterized by an intensive investigation into model compression and sparse training techniques, optimization algorithms, and the synergy between algorithms and diverse hardware platforms, which includes GPUs, FPGAs, and emerging technologies like ReRAM. The goal is to catalyze the emergence of neural networks that are not just sustainable and scalable, but also democratically accessible and ethically responsible.

How do you hope that you will have changed computing in five years?

In the next five years, I am committed to continuing my work in the field of efficient machine learning and AI systems. My objective is to spearhead a series of breakthroughs, particularly in enhancing energy efficiency—a cornerstone for sustainable technological advancement. Through an integrative approach of algorithm and hardware co-design, I foresee my efforts contributing to more synergistic and robust AI systems.

The democratization of AI is another pivotal aspect of my vision. I intend to break down barriers, making sophisticated AI tools accessible to a wider audience and facilitating their integration into a myriad of devices. By doing so, AI will not only serve the few but empower the many, transcending traditional technological limitations.

Moreover, my enthusiasm for refining the intricacies of large language models and generative AI is unwavering. These areas are ripe with potential to revolutionize how we interact with and benefit from artificial intelligence. By fostering innovative algorithm development alongside hardware co-design, I am confident that we can make AI use more sustainable, ethically grounded, and impactful.

My aspiration is not merely to advance the field in academic or technical terms but to ensure these improvements lead to tangible benefits for society. By driving these changes, I hope to play a part in shaping a future where AI is not only more efficient but also more aligned with the ethical and practical needs of our global community.

How does additional support allow you to more effectively complete your graduate study?

During my Ph.D. study, besides the mentorship from my advisors Prof. Caiwen Ding and Prof. Omer Khan, I have received multiple forms of additional support, such as fellowships from the Computer Science Department, CACC, Cigna, and Eversource; a student travel grant from the Workshop for Women in Hardware and Security; and advanced computational resources from the lab. These have been instrumental in enhancing the effectiveness and scope of my graduate research, pushing me to a higher level. Financial assistance alleviates the burden of tuition and living expenses, enabling me to dedicate more time to my studies and research, delving deeply into complex problems and innovating in the field of efficient machine learning. Access to state-of-the-art GPUs allows me to experiment with large-scale models and datasets, conduct extensive experiments and simulations, and verify the effectiveness of designs more rapidly. This is particularly crucial in a resource-intensive field like deep learning, especially for today's exploration of large language models.

What are you hoping to do upon graduation?

I hope to be an assistant professor after graduation, and I am on the job market this year.

(For papers) What is the major improvement made in this work?  What consequences does this improvement have for the field in general?

The major improvement made in this work is the development of a Structural Linearized Graph Convolutional Network (LinGCN) that optimizes the performance of Homomorphically Encrypted (HE) based GCN inference, reducing multiplication depth and addressing HE computation overhead.

This improvement has significant consequences for the field in general, as it enables the deployment of GCNs in the cloud while preserving data privacy. Additionally, the proposed framework can be applied to other machine learning models besides GCNs, making it a valuable contribution to the field of privacy-preserving machine learning.
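For intuition about why multiplication depth matters under homomorphic encryption, here is a minimal, generic sketch (not the paper's LinGCN design) of a graph-convolution layer that swaps ReLU for a low-degree polynomial activation, a common HE-friendly substitution; all names, shapes, and data are illustrative.

```python
# Illustrative sketch only -- NOT the LinGCN method from the paper.
# It shows a generic HE-friendly design idea: replacing ReLU with a
# low-degree polynomial (here x^2) so each layer adds only a small
# amount of multiplicative depth when evaluated under encryption.
import torch
import torch.nn as nn

class PolyGCNLayer(nn.Module):
    """One graph-convolution layer with a square activation."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # a_hat: normalized adjacency (N x N), x: node features (N x in_dim)
        h = a_hat @ self.linear(x)      # neighborhood aggregation + projection
        return h * h                    # polynomial activation instead of ReLU

# Toy usage with random plaintext data; HE evaluation would happen downstream.
n_nodes, in_dim, out_dim = 5, 8, 4
a_hat = torch.eye(n_nodes)             # placeholder for a normalized adjacency
x = torch.randn(n_nodes, in_dim)
layer = PolyGCNLayer(in_dim, out_dim)
print(layer(x, a_hat).shape)           # torch.Size([5, 4])
```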

Shaoyi Huang

"Attending this year's conference would significantly enrich my experience, providing more opportunities to engage with experts in the field, strengthen my professional network, and enhance my prospects for a future faculty position. Therefore, receiving support from CACC for this trip would be invaluable to my career, helping me realize my dream''.

Amid increasing demand, CT colleges in arms race to add cybersecurity programs, faculty


With thousands of cybersecurity job openings around the state — and entry-level positions that can command a six-figure starting salary — training the next generation of security engineers is a key challenge for Connecticut.

Colleges around the state say the fast-changing curriculum, difficulty of retaining expert faculty, importance of linking closely to industry, and looming challenge of AI make cybersecurity one of the most dynamic fields in education right now.

Another challenge is the ever-widening circle of people who need to be trained in combating cyberattacks.

Benjamin Fuller

“It’s not going to be good enough for there to be 10% or 15% of computer scientists who fix everybody else’s problems,” said Benjamin Fuller, an associate professor in the computer science department at the University of Connecticut.

Read more at HartfordBusiness.com

 

Four From UConn Named Fellows By AAAS

The AAAS is the world’s largest general scientific society.

Four University of Connecticut faculty members have been elected by the American Association for the Advancement of Science (AAAS) to its newest class of fellows. The AAAS is the world’s largest general scientific society and publisher of the Science family of journals.

The four are:

* Bahram Javidi, a professor in the Department of Electrical and Computer Engineering in the School of Engineering

* James Magnuson, a professor in the Department of Psychological Sciences in the College of Liberal Arts and Sciences

* Wolfgang Peti, a professor in the Department of Molecular Biology and Biophysics at UConn School of Medicine

* Anthony Vella, a professor and chair of the Department of Immunology at UConn School of Medicine and the Senior Associate Dean for Research Planning and Coordination

Read more

CT officials: Cybersecurity a threat, but also a source of jobs

Kazem Kazerounian, dean of the UConn School of Engineering, discussed combating cyberattacks during a forum Monday with Gov. Ned Lamont. 

HARTFORD — The age of increasing cyberattacks threatens businesses, state infrastructure, government and Connecticut's utilities.

But the current vulnerabilities have also created opportunities to share information and train people to fill an estimated 600,000 future cybersecurity jobs across the country, state experts said Monday during a forum at the University of Connecticut School of Business.

"If I was a bad actor, I would think I'd go after the low-hanging fruit" presented by smaller towns in the state, said Gov. Ned Lamont. "I would assume that they would be a little less sophisticated when it comes to cyber protections. I would worry that that's a back door into the Department of Revenue Services or your financial entity, or your utility. I assume this is a really good way to check on those doors that are left ajar and to make sure they're locked. That makes an awful lot of sense to me. Get on-board with these skills. You're going to have to learn these skills. It's an incredibly important skill set to have. There's a guaranteed job."

Read more at The Register Citizen

Cyberattack Continues to Impact ECHN and Waterbury Health: NBC CT News interviews UConn Professor Laurent Michel

Students walking out of the Information Technologies Building during the fall. Oct. 18, 2022. (Sean Flynn/UConn Photo)

A systemwide IT outage caused by a cyberattack continues to affect Eastern Connecticut Health Network and Waterbury HEALTH.

Both health networks are owned by Los Angeles-based Prospect Medical, which is experiencing a system-wide outage because of the cyberattack.

ECHN said its hospitals and affiliated providers are continuing to treat patients and its emergency departments are open.

Click view video to watch UConn Professor Laurent Michel's interview with NBC CT News.

View Video @ NBC Connecticut

New NSF CAREER Awardee: Algorithmic and Statistical Modeling of Haplotypes

Congratulations to CSE Assistant Professor Derek Aguiar who was awarded an NSF CAREER award titled “Practical algorithms and high dimensional statistical methods for multimodal haplotype modelling.” This project addresses major challenges in computational biology and applied machine learning by innovating new robust mathematical models that make few assumptions and efficient training algorithms to leverage massive and complex cellular data.

Source: NSF

Massive and diverse datasets have been generated from human cells with the goal of explaining the many ways cellular differences affect the observed differences in traits between people. Mathematical models of the genetic differences between people can be used to explain, for example, why some individuals are predisposed to developing a particular disease. However, most mathematical models make overly simplistic assumptions about how genetic differences interact to influence an observed trait. This project addresses major challenges in computational biology and applied machine learning by innovating new robust mathematical models that make few assumptions and efficient training algorithms to leverage massive and complex cellular data. Specifically, the project considers: (a) methods for computing sequences of genetic differences by integrating different types of data, machine learning, and algorithmic techniques; (b) mathematical models for characterizing the genetic similarity between people; and (c) efficient algorithms that scale to large datasets. The results of this project include new methods that are broadly applicable to clustering massive and diverse sequential data, and specifically helpful for researchers trying to understand how genetic differences affect disease and other traits. Furthermore, the research supports the math and science high school and university communities by developing interactive learning modules and networking resources.

This project develops the statistical and algorithmic foundations for sequences of multimodal variation (i.e., multiomic haplotypes) in two research directions. The first direction introduces the multiomic haplotype data structure and develops new Bayesian nonparametric models and fast inference algorithms for clustering multiomic haplotypes from heterogeneous and high dimensional biomolecular data. Computational tractability is achieved through novel and efficient inference algorithms that operate in data-space (Bayesian coresets), model-space (deep approximations), and algorithm-space (variational approximations). The second direction develops the first model that unifies the combinatorial domain of haplotype assembly with the probabilistic haplotype phasing domain to infer latent haplotypes. The investigator will accomplish this unification goal by combining directed and undirected graphical modeling techniques with efficient particle-based inference algorithms. The completion of these research tasks will result in new methods for developing deep approximations for high dimensional Bayesian nonparametric models, models for multimodal sequential clustering, and methods to accelerate the training of high dimensional statistical models. Additionally, the research addresses (a) the longstanding open problem of haplotype assembly and haplotype phasing unification; and (b) potential sources of missing heritability in association studies: phase-dependent genetic and haplotype-epigenetic interactions. Partnerships with the university and regional high school communities will translate the research findings into educational modules and resources to motivate, engage, and retain computer science students and teachers.
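As a generic, illustrative example of the Bayesian nonparametric clustering techniques mentioned above (not the project's multiomic haplotype models), the sketch below fits a truncated Dirichlet-process mixture with variational inference using scikit-learn on synthetic data.

```python
# Generic illustration only -- not the project's haplotype models.
# A Dirichlet-process-style mixture fit with variational inference,
# the same family of Bayesian nonparametric clustering described above,
# applied here to synthetic 2-D data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[3, 3], scale=0.3, size=(100, 2)),
    rng.normal(loc=[0, 4], scale=0.3, size=(100, 2)),
])

# The truncated Dirichlet-process prior lets the model use fewer than
# n_components clusters; variational inference keeps the fit tractable.
model = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
)
labels = model.fit_predict(data)
print("effective clusters:", len(np.unique(labels)))
```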

Making AI More Secure with Privacy-Preserving Machine Learning

Congratulations to CSE Assistant Professor Caiwen Ding who, in collaboration with Wujie Wen from Lehigh University and Xiaolin Xu from Northeastern University, was awarded a $1.2M NSF grant for “Accelerating Privacy-Preserving Machine Learning as a Service: From Algorithm to Hardware.” This research project focuses on the design of efficient algorithm-hardware co-optimized solutions to accelerate privacy-preserving machine learning on diverse hardware platforms.

Source: NSF

Machine learning (ML) as a service is being overwhelmingly driven by the ever-increasing clients’ intelligent data processing needs through the use of cloud servers, where powerful ML models are hosted. Although pervasive, out-sourced ML processing poses real threats to personal or business providers’ data privacy. For example, the clients either need to share their sensitive data, such as healthcare records, financial information, with the server, or the server has to disclose the model to the clients. To guarantee privacy, the rise of cryptographic protocols, such as Homomorphic Encryption (HE), Multi-Party Computation (MPC), enable ML analytics directly on the encrypted data. While enticing, there still exists a big gap between the theory and practice, e.g., long latency due to the prohibitively expensive computation or communication overhead over ciphertext. This project aims to practically accelerate the private ML service by offering a full-fledged development of efficient, scalable and encryption-conscious computing paradigms. The project’s novelties lie in new ML-specific cryptographic operators, accuracy-preserving and crypto-friendly neural architectures, and pioneered algorithm-hardware co-design methodologies. The project’s broader significance and importance are: (1) to advance trustworthy artificial intelligence (AI), one of the national strategic pillars of the National AI Initiative; (2) to deepen the understanding of interactions among cryptography, machine learning and hardware acceleration; (3) to enrich the computer engineering curriculum, and the training of students from diverse backgrounds through relevant programs at Lehigh University, Northeastern University, and the University of Connecticut.

The project will develop a multifaceted design paradigm for efficient, scalable and practical algorithm-hardware co-optimized solutions to significantly accelerate privacy-preserving machine learning on hardware platforms such as FPGA. This project consists of three intervening research thrusts: (1) to orchestrate information representation and model sparsity in the encryption domain to fundamentally decrease the memory and computation footprint in the HE inference; (2) to overcome the ultra-high overhead associated with the MPC-based solution through techniques such as encryption-aware model truncation and partial hardware reconfiguration; (3) to search for crypto-friendly and accuracy-preserving neural architectures via jointly optimizing non-linear operation reduction, and closed loop “algorithm-hardware” design space exploration.
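To give a flavor of the MPC building blocks referenced above, here is a toy sketch of additive secret sharing (illustrative only, not the project's protocols): a private value is split into random shares so that no single party sees it, yet parties can still compute sums on the shares.

```python
# Toy illustration only -- not the project's protocols.
# Additive secret sharing, a basic MPC building block: a private value is
# split into random shares so no single party learns it, yet shared values
# can still be added locally and only the final sum is revealed.
import secrets

MODULUS = 2**61 - 1  # arbitrary prime modulus for this toy example

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split `value` into n additive shares modulo MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS

# Two clients secret-share their inputs; each server adds its shares locally.
a_shares = share(42)
b_shares = share(58)
sum_shares = [(a + b) % MODULUS for a, b in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 100
```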

Three Faculty Members Promoted

The Connecticut Advanced Computing Center (CACC) is proud to announce the promotions of three faculty members. Professor Khan has been promoted to full professor, and Professors Krawec and Miao have been promoted to associate professor with tenure. We would like to extend our congratulations to these highly dedicated individuals, who are committed to promoting research and higher education and to guiding UConn engineering students.


Professor Krawec

Professor Krawec has been promoted to Associate Professor of Computer Science at the University of Connecticut. His primary research interests are in quantum cryptography and quantum information theory. He is very interested in studying the quantum resources required to gain an advantage over a classical protocol in cryptographic applications. His other areas of interest include security, networking, and evolutionary algorithms (especially their use in studying problems in cryptography).

Professor Krawec is always happy to hear from motivated students at all levels looking to get involved in research.


Professor Omer Khan

Omer Khan has been promoted to Professor of Electrical and Computer Engineering at the University of Connecticut. He holds the Castleman Term Professorship in Engineering Innovation and serves as an Associate Director of the Connecticut Advanced Computing Center (CACC). Prior to joining UConn, Khan was a Postdoctoral Research Scientist at the Massachusetts Institute of Technology. He received his Ph.D. from the University of Massachusetts Amherst. Before joining academia, he designed microprocessors at the leading semiconductor companies Motorola and Intel.


Professor Fei Miao

Professor Fei Miao has been promoted to Associate Professor in the Department of Computer Science & Engineering, with a courtesy appointment in the Department of Electrical & Computer Engineering, and is also affiliated with the Institute for Advanced Systems Engineering at the University of Connecticut. Before joining UConn, she was a postdoctoral researcher at the GRASP Lab and the PRECISE Lab with Professor George J. Pappas and Professor Daniel D. Lee in the Department of Electrical and Systems Engineering at the University of Pennsylvania.

Professor Miao received her Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in 2016, along with the “Charles Hallac and Sarah Keil Wolf Award for Best Doctoral Dissertation” and a dual master’s degree in Statistics from the Wharton School. She received a Bachelor of Science from Shanghai Jiao Tong University (SJTU) in 2010, with a major in Automation and a minor in Finance.