Current Research Activities:

Big Data

Due to its overwhelming volume, explosive growth, and content diversity, image/video data constitutes a major part of Big Data, which features volume, velocity, variety, etc. According to a Cisco report, global mobile data traffic reached 885 petabytes per month in 2012, of which about 50% was video traffic; by 2017, global mobile data traffic was projected to increase 13-fold, with video's share rising to two-thirds. The wide availability of imaging and sensor devices is expected to further fuel the growth of image/video data.

Among the many challenges in large-scale image/video data processing, two fundamental issues stand out: theoretical modeling of image/video data and non-linear data reduction. Since the DCT is widely applied in image/video processing, a deep and accurate understanding of the distribution of DCT coefficients is useful for quantization design, entropy coding, rate control, image understanding and enhancement, etc. On the other hand, linear and non-linear data reduction is of paramount importance to the operability and scalability of large-scale image/video data processing systems.

We believe that theoretical modeling of image/video data and non-linear data reduction are connected and go hand in hand. Our purpose here is to present a unified way to tackle both. Specifically, we propose a new model, dubbed the transparent composite model (TCM), for transformed image/video data, which first partitions a sequence of DCT coefficients into a tail part and a main part, each described by its own distribution. We further propose efficient online algorithms with global exponential convergence to compute the maximum likelihood (ML) estimates of the model parameters. Analysis and experimental results show that, for real-valued continuous AC coefficients, the TCM with a truncated Laplacian distribution as its parametric distribution (LPTCM) matches pure generalized Gaussian (GG) models in terms of modeling accuracy, but with simplicity and practicality similar to those of pure Laplacian models, hence offering the best of both. Furthermore, it is demonstrated that the LPTCM also exhibits a good capability for non-linear data reduction. On one hand, data in the heavy tail identified by the LPTCM are truly outliers, and these outliers across all AC frequencies of an image form an outlier image revealing some unique global features of the image; on the other hand, the outlier image contains only a statistically insignificant part (around 1%) of the original data, thus achieving a dramatic data reduction. This, together with the simplicity of the model and the fast convergence of the estimation algorithms, makes the LPTCM a desirable choice for modeling large-scale image/video data in real-world applications.
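The core partitioning step can be illustrated with a minimal sketch. The boundary `yc` and the function below are hypothetical simplifications for illustration only; the actual LPTCM estimates the separation boundary and the truncated-Laplacian parameters jointly by ML.

```python
def lptcm_split(ac_coeffs, yc):
    """Split AC DCT coefficients into a main part (|x| <= yc), modeled by a
    truncated Laplacian, and a heavy tail of outliers (|x| > yc)."""
    main = [x for x in ac_coeffs if abs(x) <= yc]
    tail = [x for x in ac_coeffs if abs(x) > yc]
    # Crude ML estimate of the Laplacian scale on the main part
    # (ignoring the truncation correction for brevity).
    b_hat = sum(abs(x) for x in main) / len(main) if main else 0.0
    outlier_fraction = len(tail) / len(ac_coeffs) if ac_coeffs else 0.0
    return main, tail, b_hat, outlier_fraction
```

Collecting the tail parts across all AC frequencies of an image yields the outlier image described above.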


Image Management

Coming soon ....


Information Theory

Information theory is a branch of applied mathematics and electrical engineering concerned with the quantification of information; it is generally considered to have been founded in 1948 by Claude Shannon in his seminal work, "A Mathematical Theory of Communication." Applications of its fundamental topics include lossless data compression, lossy data compression, and channel coding. In the Multicom Research Lab, we focus on the following research areas: Data Compression, Distributed Source Coding, Interactive Encoding and Decoding, and Digital Watermarking and Information Hiding.
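As a one-function illustration of what "quantification of information" means, the Shannon entropy of a source gives the fundamental limit on the average number of bits per symbol needed for lossless compression:

```python
from math import log2

def entropy(pmf):
    """Shannon entropy H(X) in bits per symbol: the fundamental limit on
    the average rate of lossless compression of a memoryless source."""
    return -sum(p * log2(p) for p in pmf if p > 0)
```

For a fair coin, `entropy([0.5, 0.5])` is 1 bit, while a deterministic source has entropy 0: no code can compress below the former, and nothing needs to be sent for the latter.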


Multimedia Compression

Coming soon ....

Current Researchers: Chang Sun and Nan Hu


Distributed Source Coding

In conventional multimedia coding algorithms, as standardized by MPEG, the encoder exploits the statistics of the source signal. This principle seemed so fundamental that it was rarely questioned until the recent emergence of wireless sensor networking technology. Wireless sensors, which are usually mission-driven and application-specific, are expected to operate under severe lifetime energy limits, in contrast to many prevailing wireless devices, such as mobile phones, PDAs and laptops, whose batteries can be recharged from time to time. This energy constraint also limits their information-processing capability, which has motivated research efforts to shift the computational burden from the conventional source encoder to the decoder. The key technology enabling this shift is distributed source coding (DSC).

In a DSC system, multiple correlated sources are encoded independently, while efficient compression is achieved by exploiting the source statistics solely at the decoder. Although Slepian and Wolf, and Wyner and Ziv, established the information-theoretic foundations for distributed lossless and lossy coding, respectively, in the 1970s, practical coding schemes have been attempted only in the last few years. Most DSC techniques today are derived from proven channel coding ideas. Specifically, state-of-the-art Slepian-Wolf coding schemes mainly employ sophisticated channel codes, such as turbo codes and LDPC codes, to model the source correlation, while state-of-the-art Wyner-Ziv coding schemes focus on the joint design of the quantizer and the Slepian-Wolf encoder.
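The syndrome-based flavor of Slepian-Wolf coding can be sketched at toy scale (the 2x4 parity-check matrix `H` below is an arbitrary illustration, not a code used in practice): the encoder transmits only the 2-bit syndrome of its 4-bit word, and the decoder searches the corresponding coset for the word closest to its side information.

```python
from itertools import product

H = [[1, 0, 1, 1],
     [0, 1, 1, 0]]  # toy 2x4 parity-check matrix over GF(2)

def syndrome(x):
    """Syndrome H*x over GF(2); this is all the encoder transmits."""
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def sw_decode(s, y):
    """Among all length-4 binary words with syndrome s, return the one
    closest in Hamming distance to the side information y."""
    best = None
    for cand in product([0, 1], repeat=4):
        if syndrome(cand) == s:
            d = sum(a != b for a, b in zip(cand, y))
            if best is None or d < best[0]:
                best = (d, cand)
    return best[1]
```

Here `x = (1, 0, 1, 1)` is recovered from its 2-bit syndrome plus a side-information word differing from `x` in one position, i.e., at half the rate of sending `x` directly.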

Although channel-code-based approaches have been shown to achieve compression performance near the theoretical limits for both Slepian-Wolf and Wyner-Ziv coding, their code design relies on the correlation model, which in practice is usually unknown at the encoder, or even at the decoder. Very recently, Yang proposed initial work on universal DSC. To drop the assumption that the correlation model is known a priori, a feedback channel is constructed from the decoder to the encoder so that the two can cooperate in string matching based on a shared random database that is independent of all sources. Asymptotically, the compression ratio approaches the theoretical limit achievable when the encoder knows the correlation model beforehand, while the feedback rate goes to 0. Motivated by this work, we are currently investigating a number of issues in the DSC paradigm.

All in all, DSC is a brand new research area and its applications offer a unique opportunity to revisit and extend techniques of conventional source coding under the new paradigm.

Current Researchers: Jin Meng, Lin Zheng


Interactive Encoding and Decoding

Communication is interactive in essence. This is evident not only in all kinds of communication between people, but also in many communication protocols, such as TCP/IP.

Interactive communication for lossless compression with side information only at the decoder was first considered by Orlitsky. In his setup, the decoder, with side information Y, tries to learn X, available at the encoder, through two-way transmission, where X has to be reconstructed at the decoder with zero probability of error. Note that this reconstruction requirement is stricter than in the Slepian-Wolf case, where the probability of error goes to 0 asymptotically with the block length; consequently, the rate in this setup is higher than in the Slepian-Wolf case. Meanwhile, the idea of incremental encoding was introduced into asymmetric SW coding by Feder and Shulman, who considered the scenario in which one common source is broadcast to several receivers with different side information. Coupling incremental encoding with the universal fixed-rate SW coding scheme proposed by Csiszar and Korner, Draper built a universal SW coding scheme. However, since the universal coding scheme of Csiszar and Korner applies only to memoryless source pairs, so does Draper's scheme.

Recently, the concept of interactive encoding and decoding (IED) was formalized by Professor Yang and Doctor He. A special case of IED for (near) lossless one-way learning (in other words, lossless source coding) with decoder-only side information is presented here, where X denotes a finite-alphabet source to be learned at the decoder, Y denotes another finite-alphabet source that is correlated with X and is available only to the decoder as side information, and R denotes the average number of bits per symbol exchanged between the encoder and the decoder, which measures the performance of the IED scheme used. From this description, we see that the main difference between IED and non-interactive Slepian-Wolf coding is that IED allows the encoder and the decoder to interact until the learning (or source coding) task is accomplished.
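A toy sketch of the interaction (an illustrative simplification, not the scheme of Professor Yang and Doctor He): each round, the encoder sends one parity bit of X over the forward channel, and the decoder prunes a candidate list consistent with Y under an assumed bounded-distance correlation model, replying over the backward channel until a single candidate survives.

```python
import random
from itertools import product

def parity(word, mask):
    return sum(w & m for w, m in zip(word, mask)) % 2

def ied_learn(x, y, t=1, seed=1, max_random_rounds=16):
    """Toy IED: the decoder assumes the source lies within Hamming
    distance t of its side information y (a hypothetical correlation
    model) and prunes candidates as parity bits of x arrive."""
    n = len(x)
    rng = random.Random(seed)
    candidates = [c for c in product([0, 1], repeat=n)
                  if sum(a != b for a, b in zip(c, y)) <= t]

    def masks():
        # Random parity masks first; then unit masks, which reveal x bit
        # by bit and guarantee that this toy loop terminates.
        for _ in range(max_random_rounds):
            yield tuple(rng.randint(0, 1) for _ in range(n))
        for i in range(n):
            yield tuple(int(j == i) for j in range(n))

    bits_sent = 0
    for mask in masks():
        if len(candidates) == 1:   # backward channel: ACK, stop
            break
        bit = parity(x, mask)      # forward channel: one parity bit
        candidates = [c for c in candidates if parity(c, mask) == bit]
        bits_sent += 1             # backward channel: NAK, continue
    return candidates[0], bits_sent
```

Because X itself survives every parity check, the unique survivor is guaranteed to be X; interaction lets the number of bits actually sent adapt to how informative Y turns out to be.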

Several important results concerning IED for (near) lossless source coding with decoder-only side information were established by Professor Yang and Doctor He. Specifically, in comparison with non-interactive Slepian-Wolf coding, it was shown that IED not only delivers better first-order (asymptotic) performance for general stationary, non-ergodic source-side information pairs, but also achieves better second-order performance for memoryless pairs with known statistics. Furthermore, in contrast to the well-known fact that universal Slepian-Wolf coding does not exist, it was shown that, coupled with classical universal lossless codes, one can build IED schemes that are truly universal in the sense that they are asymptotically optimal with respect to the class of all stationary, ergodic source-side information pairs.

Inspired by the fundamental results above, a natural question is how to design a practical IED scheme that achieves the performance promised by those results. Moreover, can IED be extended to lossy data compression? This part of our research therefore focuses on:

  1. How to design a practical IED scheme with low encoding and decoding complexity that is universal with respect to the source statistics?
  2. How to extend IED schemes to lossy data compression cases such as the Wyner-Ziv problem?

Current Researcher: Jin Meng


Copyright 2004-2015 Multicom Research Lab.

Currently Maintained by Nan Hu

Last updated in Feb. 2015
