Current projects

Title Description Faculty
Acceleration of Deep Learning for Cloud and Edge Computing

In this project, we explore efficient algorithms and architectures for state-of-the-art deep-learning-based applications. In the first set of works, we study learning algorithms and acceleration techniques for graph learning. At their core, these workloads reduce to sparse matrix multiplications, so we develop efficient customized accelerators for them. The second work, Caffeine, offers a uniformed framework to accelerate the full stack of convolutional neural networks (CNNs), including both the convolutional layers and the fully connected layers. Following this work, we further...
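
As an illustration of why sparse matrix multiplication sits at the core of graph learning, below is a minimal sketch of a simplified graph-convolution layer written as a sparse-dense matrix product. The graph, feature sizes, and weights are made-up stand-ins for this example, not the project's accelerator or model code.

    import numpy as np
    import scipy.sparse as sp

    # Toy graph: 4 nodes, with edges stored as a sparse adjacency matrix.
    rows = np.array([0, 0, 1, 2, 2, 3])
    cols = np.array([1, 2, 0, 0, 3, 2])
    adj = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

    # Node features and layer weights (random stand-ins for a trained model).
    features = np.random.rand(4, 8)   # 4 nodes, 8 input features each
    weights = np.random.rand(8, 16)   # 8 -> 16 hidden features

    def gcn_layer(adj, x, w):
        """One simplified graph-convolution layer: aggregate neighbors, then transform.

        The aggregation adj @ x is a sparse-dense matrix multiplication (SpMM);
        its irregular memory accesses are what a customized accelerator targets.
        """
        aggregated = adj @ x                     # SpMM over the graph structure
        return np.maximum(aggregated @ w, 0.0)   # dense GEMM followed by ReLU

    hidden = gcn_layer(adj, features, weights)
    print(hidden.shape)  # (4, 16)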

Jason Cong
Architecture and Compilation for Quantum Computing

Description:

  • Compilation in quantum computing (QC)
  • Optimality study - how far are we from optimal?
  • Optimal quantum layout synthesis
  • Exploring architecture design with layout synthesis
  • Layout synthesis for reconfigurable QC architectures

Compilation in quantum computing (QC)

Quantum computing (QC) has been shown, in theory, to hold huge advantages over classical computing. However, there remain many engineering challenges in implementing real-world QC applications. In order to divide-and-...
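
To make the layout-synthesis problem listed above concrete, here is a minimal sketch of routing a toy logical circuit onto a hypothetical 4-qubit device with a line-shaped coupling graph. The device, circuit, and greedy SWAP-insertion heuristic are illustrative assumptions, not the optimal layout-synthesis methods studied in this project.

    # Hypothetical 4-qubit device with a line coupling graph: 0 - 1 - 2 - 3.
    # Two-qubit gates may only act on physically adjacent qubits.
    coupling = {(0, 1), (1, 2), (2, 3)}

    def adjacent(p, q):
        return (p, q) in coupling or (q, p) in coupling

    # A toy logical circuit given as a list of two-qubit gates on logical qubits.
    circuit = [(0, 1), (0, 2), (1, 3)]

    # Trivial initial layout: logical qubit i starts on physical qubit i.
    initial_mapping = {0: 0, 1: 1, 2: 2, 3: 3}

    def route(circuit, mapping):
        """Greedy sketch of layout synthesis: insert SWAPs until each gate is executable.

        Real layout synthesis searches for an optimal or near-optimal schedule; this
        toy version only walks the first qubit toward the second along the line and
        records the SWAP overhead it introduces.
        """
        schedule = []
        for a, b in circuit:
            while not adjacent(mapping[a], mapping[b]):
                pa, pb = mapping[a], mapping[b]
                step = pa + 1 if pb > pa else pa - 1
                # Swap with whichever logical qubit occupies the intermediate position.
                other = next(l for l, p in mapping.items() if p == step)
                mapping[a], mapping[other] = step, pa
                schedule.append(("SWAP", pa, step))
            schedule.append(("GATE", mapping[a], mapping[b]))
        return schedule

    for op in route(circuit, dict(initial_mapping)):
        print(op)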

Jason Cong
Automating High Level Synthesis via Graph-Centric Deep Learning

Domain-specific accelerators (DSAs) have been shown to offer significant performance and energy-efficiency gains over general-purpose CPUs to meet ever-increasing performance needs. However, it is well known that DSAs in field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) are hard to design and require deep hardware knowledge to achieve high performance. Although recent advances in high-level synthesis (HLS) tools have made it possible to compile behavioral-level C/C++ programs to FPGA or ASIC designs, one still needs to have extensive experience in...
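
To sketch what "graph-centric" means here, the example below represents a tiny HLS kernel as a dataflow graph and runs one round of message passing to produce a quality-of-results estimate. The node features, graph, and randomly initialized weights are placeholders for illustration only; they do not reflect the models developed in this project.

    import numpy as np

    # Hypothetical dataflow graph of a small kernel: nodes are operations,
    # edges are data dependences (load -> mul -> add -> store).
    nodes = ["load", "mul", "add", "store"]
    edges = [(0, 1), (1, 2), (2, 3)]
    features = np.array([[1.0, 0.0, 2.0],   # per-node features (e.g., op type,
                         [0.0, 1.0, 4.0],   # unroll factor); values are made up
                         [0.0, 1.0, 4.0],
                         [1.0, 0.0, 1.0]])

    adj = np.zeros((len(nodes), len(nodes)))
    for u, v in edges:
        adj[u, v] = adj[v, u] = 1.0          # undirected for this toy example

    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((3, 8))         # random stand-ins for trained weights
    w2 = rng.standard_normal(8)

    def predict_qor(adj, x):
        """One round of message passing followed by graph-level pooling.

        A trained graph neural network would estimate, e.g., latency or resource
        usage of the design from such a graph; here the output is meaningless
        because the weights are random placeholders.
        """
        h = np.maximum((adj + np.eye(len(x))) @ x @ w1, 0.0)  # aggregate + transform
        return float(h.mean(axis=0) @ w2)                     # pooled scalar estimate

    print(predict_qor(adj, features))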

Jason Cong
Customizable Domain-Specific Computing

http://www.cdsc.ucla.edu

To meet ever-increasing computing needs and overcome power density limitations, the computing industry has entered the era of parallelization, with tens to hundreds of computing cores integrated into a single processor, and hundreds to thousands of computing servers connected in warehouse-scale data centers. However, such highly parallel, general-purpose computing systems still face serious challenges in terms of performance, energy, heat dissipation, space, and cost. In this project we look beyond...

Jason Cong
Customized Computing for Big-Data Applications

In the era of big data, many applications present significant computational challenges. For example, in the field of bioinformatics, the computational demand of personalized cancer treatment is prohibitively high for general-purpose computing technologies: tumor heterogeneity requires great sequencing depth, structural aberrations are difficult to detect with today's algorithms, and the tumor can evolve, meaning the same tumor might be assayed a great many times during the course of treatment. The goal of this research project is to apply the domain-...

Jason Cong
Customized Computing for Brain Research and Brain-Inspired Computing

Direction 1: Real-Time Neural Signal Processing for Closed-Loop Neurofeedback Applications.

The miniaturized fluorescence microscope (Miniscope) and the tetrode assembly are emerging techniques for observing the activity of large populations of neurons in vivo. They open up new research opportunities for closed-loop neuroscientific experiments aimed at understanding how the brain works. In recent years, the number of simultaneously recorded neurons has grown exponentially. This project aims to create high-performance, energy-efficient accelerators to support real-time neural...

Jason Cong
Near Data Computing

In the Big Data era, the volume of data is exploding, posing a new challenge to existing computer systems. Traditionally, computer systems have been designed to be computing-centric: data from I/O devices is transferred to the CPU and then processed there. However, this data movement has proven to be very expensive and can no longer be ignored in the Big Data era. To meet ever-increasing performance needs, we expect computer systems to be redesigned in a data-centric fashion, with different computing engines deployed at different levels of the storage hierarchy, including cache, memory...
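
The appeal of moving computation closer to the data can be seen with a back-of-the-envelope comparison: pushing a selective filter down to a storage-side engine ships only the matching records to the host. The record size, record count, and selectivity below are made-up numbers for illustration.

    # Computing-centric vs. data-centric processing of a selective filter query.
    RECORD_BYTES = 128          # size of one record (assumed)
    NUM_RECORDS = 10_000_000    # records stored on the device (assumed)
    SELECTIVITY = 0.01          # fraction of records that pass the filter (assumed)

    # Computing-centric: every record crosses the I/O path to the CPU before filtering.
    bytes_moved_host = NUM_RECORDS * RECORD_BYTES

    # Data-centric: a near-data filter engine scans locally and ships only matches.
    bytes_moved_near_data = int(NUM_RECORDS * SELECTIVITY) * RECORD_BYTES

    print(f"host-side filtering moves {bytes_moved_host / 1e9:.2f} GB")
    print(f"near-data filtering moves {bytes_moved_near_data / 1e9:.2f} GB")
    print(f"data-movement reduction:  {bytes_moved_host / bytes_moved_near_data:.0f}x")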

Jason Cong
Programming Infrastructure for Heterogeneous Architectures

Heterogeneous computing with extensive use of accelerators, such as FPGAs and GPUs, has shown great promise for bringing orders-of-magnitude improvements in computing efficiency to a wide range of applications. The latest advances in industry have led to highly integrated heterogeneous hardware platforms, such as the CPU+FPGA multi-chip packages from Intel and the GPU- and FPGA-enabled AWS cloud from Amazon. However, although these heterogeneous hardware computing platforms are becoming widely available to industry, they are very difficult to program, especially the FPGAs. The use of such...

Jason Cong