Wyner-Ziv coding is a fundamental and representative form of source coding for multiple information sources. In Wyner-Ziv coding, the encoder individually encodes only one of two fixed-length source sequences emitted from two correlated sources. The decoder then decodes the encoded sequence by referring to the other source sequence as side information, where the decoded sequence is allowed to exhibit distortion from the original sequence. In standard Wyner-Ziv coding, it is assumed that the decoder can receive the side information sequence without delay. In previous work, we introduced Wyner-Ziv coding with delayed side information and gave computable upper and lower bounds on the rate-distortion function, which represents the infimum of the coding rate for a given tolerance level of distortion. Using these bounds, we showed that there exists a case where the rate-distortion function for a given tolerance level is strictly larger than that for the case without delay. In this paper, we first review known results on lossy source coding without side information and on Wyner-Ziv coding. We then provide a detailed discussion of the above results. Furthermore, we introduce an information source whose rate-distortion function for a given tolerance level is strictly larger than that for the case without delay, and present a numerical example of the rate-distortion function for this source.
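As a point of reference for the rate-distortion functions discussed above, the following sketch evaluates the classical (no side information, no delay) rate-distortion function of a Bernoulli(p) source under Hamming distortion, R(D) = h(p) − h(D). This is a textbook baseline, not the delayed-side-information bounds of the paper; the function names are illustrative only.

```python
from math import log2

def h2(p):
    """Binary entropy in bits; h2(0) = h2(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def rate_distortion_binary(p, D):
    """R(D) for a Bernoulli(p) source with Hamming distortion:
    R(D) = h(p) - h(D) for 0 <= D < min(p, 1-p), and 0 otherwise."""
    if D >= min(p, 1 - p):
        return 0.0
    return h2(p) - h2(D)

# Example: a fair binary source with tolerated distortion D = 0.1
# needs a rate of about 0.53 bits per symbol.
rate = rate_distortion_binary(0.5, 0.1)
```

Comparing such a baseline curve against the upper and lower bounds for the delayed-side-information setting is what reveals the strict rate penalty mentioned in the abstract.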
The security of public-key cryptography rests on the hardness of mathematical problems such as the integer factorization problem (IFP) and the discrete logarithm problem (DLP). However, in 1994 Shor proposed a polynomial-time quantum algorithm for solving the IFP and the DLP, and thus the widely used public-key cryptosystems (the RSA cryptosystem and elliptic curve cryptography) are expected to eventually become vulnerable. Against this background, the U.S. National Security Agency (NSA) announced preliminary plans for transitioning to quantum-resistant algorithms in 2015, and the National Institute of Standards and Technology (NIST) began its standardization of post-quantum cryptography (PQC) in 2016. In this article, we give an overview of recent research on PQC, which is expected to remain secure in the era of quantum computers.
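The reduction Shor's algorithm exploits can be illustrated classically: factoring N reduces to finding the multiplicative order r of some a modulo N, after which gcd(a^(r/2) ± 1, N) yields a nontrivial factor. The quantum speedup lies entirely in the order-finding step; the brute-force `order` below stands in for it and is purely illustrative.

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a modulo n, by brute force.
    Shor's quantum algorithm performs this step in polynomial time."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n, a):
    """Shor's classical reduction: if r = ord_n(a) is even and
    a^(r/2) != -1 (mod n), then gcd(a^(r/2) - 1, n) splits n.
    Returns a nontrivial factor of n, or None if a was unlucky."""
    g = gcd(a, n)
    if g > 1:
        return g  # a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None
    return gcd(y - 1, n)

# Example: factoring 15 with the base a = 7.
factor = factor_via_order(15, 7)
```

Once order finding is cheap, as on a quantum computer, this reduction breaks RSA-style moduli, which is precisely why PQC replaces the IFP and DLP with problems believed hard even for quantum machines.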
Stochastic computation has recently been studied for soft-error-resilient hardware and for approximate computing applications such as image processing, machine learning, and deep neural networks. This paper reviews stochastic computation and discusses its advantages and disadvantages in light of recent hardware developments. In addition, stochastic-computation-based brainware LSIs (BLSIs) for vision information processing are introduced and discussed in terms of energy efficiency.
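The core idea of stochastic computation can be shown in a few lines: values in [0, 1] are encoded as the ones-density of random bit streams, and multiplication then costs a single AND gate per bit. The sketch below simulates this in software under the standard unipolar encoding; the function names are illustrative, not from the paper.

```python
import random

def to_stream(p, length, rng):
    """Unipolar stochastic stream: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def stochastic_multiply(pa, pb, length=10000, seed=0):
    """Multiply two probabilities by ANDing two independent bit
    streams; the ones-density of the result approximates pa * pb."""
    rng = random.Random(seed)
    a = to_stream(pa, length, rng)
    b = to_stream(pb, length, rng)
    y = [u & v for u, v in zip(a, b)]
    return sum(y) / length
```

The trade-off visible here is exactly the one the paper reviews: a multiplier shrinks to one gate and single bit-flips cause only tiny errors (soft-error resilience), but precision grows only as the square root of the stream length.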
This tutorial paper explains two subjects: (i) that various image processing tasks, such as denoising and color correction, can be solved by formulating them as optimization problems that incorporate local image features as regularization, and (ii) how to solve these optimization problems efficiently. Specifically, focusing on applications that exploit the color-line feature, we elaborate on the techniques required for optimization in image processing from the viewpoints of both formulation and the solver.
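A minimal instance of subject (i) and (ii) together, using a generic quadratic smoothness regularizer rather than the color-line prior discussed in the paper: denoising is posed as minimizing a data-fidelity term plus a regularizer, and solved by plain gradient descent. All names and parameter values below are illustrative assumptions.

```python
def denoise_1d(y, lam=2.0, step=0.1, iters=500):
    """Denoise a 1-D signal y by minimizing
        0.5 * ||x - y||^2 + 0.5 * lam * sum_i (x[i+1] - x[i])^2
    with gradient descent (quadratic smoothness regularization).
    step must satisfy step < 2 / (1 + 4*lam) for convergence."""
    n = len(y)
    x = list(y)
    for _ in range(iters):
        # gradient of the data-fidelity term
        g = [x[i] - y[i] for i in range(n)]
        # add the gradient of the smoothness regularizer
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            g[i] -= lam * d
            g[i + 1] += lam * d
        x = [x[i] - step * g[i] for i in range(n)]
    return x
```

Real image priors such as the color-line feature replace the simple smoothness term with a structured, often nonsmooth regularizer, which is why the paper devotes attention to more capable solvers than gradient descent.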
This article reviews the efficient use of sets of multicarrier signals with nonorthogonal subcarriers. It is shown that, in the orthogonalization process, this signal set "meets" the so-called Slepian sequences and turns into discrete prolate spheroidal wave functions, which attain the highest spectral efficiency for a finite time duration. A simple method for modifying the Slepian sequences is presented that reveals several useful properties: sharp notches can be embedded in the signal spectra, multiple frequency bands can be used efficiently, and the spectra can be shaped toward given target spectral masks.
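Slepian sequences (discrete prolate spheroidal sequences) are the eigenvectors of the prolate matrix A[m][n] = sin(2πW(m−n)) / (π(m−n)), and their eigenvalues give the fraction of energy concentrated in the band [−W, W]. The sketch below computes the dominant eigenvector by power iteration, purely as an illustration of this spectral-concentration property; parameters N and W are arbitrary assumptions.

```python
from math import sin, pi, sqrt

def slepian_first(N=32, W=0.1, iters=300):
    """Approximate the first Slepian (DPSS) sequence of length N and
    half-bandwidth W as the dominant eigenvector of the prolate matrix,
    via power iteration.  Returns (sequence, eigenvalue); the eigenvalue
    is the in-band energy concentration, close to 1 when 2*N*W >> 1."""
    A = [[2 * W if m == n else sin(2 * pi * W * (m - n)) / (pi * (m - n))
          for n in range(N)] for m in range(N)]
    v = [1.0] * N
    for _ in range(iters):
        w = [sum(A[m][n] * v[n] for n in range(N)) for m in range(N)]
        norm = sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient = concentration eigenvalue lambda_0
    Av = [sum(A[m][n] * v[n] for n in range(N)) for m in range(N)]
    lam = sum(v[m] * Av[m] for m in range(N))
    return v, lam
```

The near-unity eigenvalue is the quantitative sense in which these sequences achieve the highest spectral efficiency for a finite duration; the spectral-shaping methods in the article start from such sequences and modify them.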