School of Electrical and Electronic Engineering The University of Adelaide Australia

Dr Cheng-Chew Lim

Current Projects

Matrix Processor Research

MatRISC - A high performance matrix and tensor processor

MatRISC is a novel parallel array architecture optimised for high-speed matrix-based computations. Computer systems that implement signal processing and numerical algorithms often exhibit a high degree of operand reuse and interaction because of the matrix operators these algorithms contain. Such algorithms can be implemented on a highly integrated parallel computer to achieve high efficiency and throughput.
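The operand reuse mentioned above can be illustrated with a block-partitioned matrix multiply over a mesh of processing elements. The sketch below is illustrative only: the 4 x 4 grid matches the prototype described later, but the block size and outer-product schedule are assumptions, not the MatRISC scheduling algorithm.

```python
import numpy as np

P = 4                      # hypothetical P x P mesh of processing elements
n = 8                      # illustrative block size held by each element
A = np.random.rand(P * n, P * n)
B = np.random.rand(P * n, P * n)
C = np.zeros((P * n, P * n))

# At each step k, one column of A-blocks and one row of B-blocks are
# shared across the mesh, so every operand block is reused by P
# processing elements rather than being fetched P times.
for k in range(P):
    for i in range(P):
        for j in range(P):
            C[i*n:(i+1)*n, j*n:(j+1)*n] += (
                A[i*n:(i+1)*n, k*n:(k+1)*n] @ B[k*n:(k+1)*n, j*n:(j+1)*n]
            )

assert np.allclose(C, A @ B)   # block schedule reproduces the full product
```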

This project investigates a highly integrated parallel processor that can be used either as a co-processor in a workstation environment or as a stand-alone device providing real-time system support.

A prototype processor is being implemented in CMOS. It contains 16 processing elements with distributed memory arranged in a 4 x 4 mesh, with four fused multiply-add units per processing element, and operates on double-precision floating-point data. The processor can operate in synchronous MIMD mode, but the usual mode of operation is synchronous SIMD, in which each processing node executes statically compiled code. The peak performance of the processor is 26 GFLOPS.
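The quoted peak rate follows from the figures above. The clock frequency is not stated in the text, so the sketch below derives the value implied by the other numbers; treat that clock figure as an inference, not a published specification.

```python
pes = 16                  # 4 x 4 mesh of processing elements
fma_per_pe = 4            # fused multiply-add units per element
flops_per_fma = 2         # one multiply plus one add per cycle

flops_per_cycle = pes * fma_per_pe * flops_per_fma   # 128 FLOPs/cycle

peak_gflops = 26.0
# Clock implied by the stated peak (an assumption derived from the text)
implied_clock_mhz = peak_gflops * 1e9 / flops_per_cycle / 1e6
print(f"{flops_per_cycle} FLOPs/cycle -> ~{implied_clock_mhz:.0f} MHz clock")
```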

Other work has included architecture studies; the design and fabrication of floating-point units, SRAMs and clock synchronisation circuits; compiler and assembler development; and the investigation and mapping of algorithms to the processor, including the Volterra model, the FFT, the Kalman filter, the SVD and support vector machines.


Researchers: Dr Cheng-Chew Lim, Dr Warren Marwood (DSTO), Mr Michael Liebelt, Dr Andrew Beaumont-Smith, Mr Kiet To (Ph.D. candidate), Mr Adam Burdeniuk (Ph.D. candidate)


Sponsors:
The Australian Research Council
The Defence Science and Technology Organisation


Array processor architectures for machine learning

The support vector machine (SVM) is a learning algorithm used chiefly for supervised classification. To obtain good generalisation in practical applications, an enormous number of training examples is required, which places a huge computational demand on conventional hardware.

Since SVM training is inherently a dense-matrix quadratic optimisation problem, we can extend our earlier work on high-performance parallel matrix processors to devise parallel processing architectures and algorithms that run machine learning algorithms at computing rates orders of magnitude faster than is currently feasible.
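A minimal sketch of why SVM training is a dense-matrix problem: the dual objective is built on the kernel (Gram) matrix, and each optimisation step involves a dense matrix-vector product. The RBF kernel, its width, and the random data below are illustrative assumptions, not the project's specific formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))        # training examples (assumed data)
y = rng.choice([-1.0, 1.0], size=200)     # class labels

# Dense kernel (Gram) matrix with an RBF kernel (illustrative choice)
gamma = 0.5
sq = (X ** 2).sum(axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

# Dual objective W(a) = sum(a) - 0.5 * a^T Q a with Q = (y y^T) * K;
# its gradient costs one dense matrix-vector product per step.
Q = np.outer(y, y) * K
a = np.zeros(200)
grad = 1.0 - Q @ a
```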

These architectures and algorithms are applicable to problems such as automatic channel classification in wideband communications systems, and feature recognition in radar images.


Researchers: Dr Cheng-Chew Lim, Mr Kiet To (Ph.D. candidate), Mr Hong Gunn Chew (Ph.D. candidate), Prof Robert Bogner


Sponsors:
The Australian Research Council
Lucent Technology (USA)
The Defence Science and Technology Organisation