Data visualization is an important technique for interpreting complex data and finding relationships within it. Although principal component analysis is widely known as a data visualization method, the Self-Organizing Map (SOM), an artificial neural network proposed by Kohonen, is also widely used as a data visualization tool. The SOM performs a topology-preserving transformation from a higher-dimensional vector space to a lower-dimensional one, and generates a map that represents the relationships between data vectors. In some cases, however, it is necessary to generate maps based on the similarity between the models that generate the data vectors. The author proposed the modular network self-organizing map (mnSOM) as a method for solving such problems. The mnSOM can be regarded as a generalized SOM, since it can generate maps for a variety of models, such as input-output functions, dynamical models, and manifolds. In this paper, the theory and learning algorithms of the mnSOM are explained, together with an explanation of the conventional SOM.
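To make the topology-preserving transformation concrete, the following is a minimal sketch of conventional SOM training in Python with NumPy. All names, grid sizes, and schedules are illustrative assumptions, not the paper's algorithm: each input is assigned to its best-matching unit (BMU), and the BMU and its grid neighbors are pulled toward the input, so nearby nodes come to represent similar data vectors.

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM sketch: map d-dimensional data onto a 2-D grid.

    Illustrative parameters only; real applications tune the grid size,
    learning-rate schedule, and neighborhood width.
    """
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    # One weight (reference) vector per grid node.
    weights = rng.random((grid_h, grid_w, d))
    # Grid coordinates, used to compute neighborhood distances on the map.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
        for x in rng.permutation(data):
            # Best-matching unit: the node whose weight is closest to x.
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)
            # Gaussian neighborhood around the BMU on the grid.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            # Pull the BMU and its neighbors toward the input vector.
            weights += lr * g[..., None] * (x - weights)
    return weights
```

Because neighboring nodes are updated together, adjacent grid cells end up with similar weight vectors, which is the topology preservation the abstract refers to; the mnSOM replaces each node's weight vector with a functional module (e.g., a small network), but the neighborhood-based update is analogous.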
Current CPUs provide a Trusted Execution Environment (TEE) mechanism to run a critical process in isolation from the operating system. Well-known TEEs include Intel SGX, AMD SEV, and Arm TrustZone. In addition, several TEE implementations have been proposed for the open architecture RISC-V. Unfortunately, TEE functions depend on the CPU implementation. The only function common to all TEEs is isolated execution, which must be combined with supporting technologies to achieve secure processing. In this paper, the details of each TEE implementation are discussed, along with its security-supporting technologies, i.e., a Root of Trust for protecting critical information and Remote Attestation for verifying CPU and code integrity. The software build environments, vulnerabilities, and standardization activities are also introduced.
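As a purely conceptual illustration of the Remote Attestation flow mentioned above, the sketch below hashes the loaded code into a "measurement" and authenticates it with a device key. This is a drastic simplification and an assumption of this note, not any vendor's protocol: real TEEs use hardware-held keys and asymmetrically signed quotes checked against an attestation service, whereas here a shared HMAC key stands in for the Root of Trust.

```python
import hashlib
import hmac
import secrets

# Stand-in for a hardware Root of Trust: in a real TEE this key is fused
# into the CPU and never leaves it; here it is just a process-local secret.
DEVICE_KEY = secrets.token_bytes(32)

def measure(code: bytes) -> bytes:
    """Measurement: a hash of the code loaded into the isolated environment."""
    return hashlib.sha256(code).digest()

def attest(code: bytes) -> tuple[bytes, bytes]:
    """Device side: produce an attestation report (measurement, signature)."""
    m = measure(code)
    sig = hmac.new(DEVICE_KEY, m, hashlib.sha256).digest()
    return m, sig

def verify(report: tuple[bytes, bytes], expected_code: bytes) -> bool:
    """Verifier side: check the signature and compare against the expected code."""
    m, sig = report
    expected_sig = hmac.new(DEVICE_KEY, m, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected_sig) and m == measure(expected_code)
```

The point of the sketch is the division of labor: isolated execution alone cannot tell a remote party *what* is running; attestation binds a code measurement to a hardware-rooted key so the verifier can detect tampered code.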
Narrative plays a significant role in the creation of societal values. Narratives can form societal values and have the power to change them. In this paper, an argument in favor of structural analysis for evaluating the appeal of a narrative structure is presented. The paper begins with an explanation of the basic theories of narratology and narrative analysis. Next, a general view of the research history of narratology and narrative analysis is presented. The paper ends with an overview of the principal methods used in narrative analysis, including a brief explanation of each method and illustrative examples.
Hyperspectral (HS) imagery provides 3D data containing both spatial (two-dimensional) and spectral (one-dimensional) information, acquired by spectroscopic imaging over a wide range of wavelengths from ultraviolet to near-infrared. HS images can visualize physical properties and phenomena that cannot be captured by the human eye or RGB cameras. However, capturing complete spatial-spectral information is often difficult because of limitations of the optical design and/or measurement conditions. It is also difficult to avoid degradation due to the various types of noise arising in the measurement process. In this paper, we review optimization-based techniques for effectively estimating the desired HS image from such incomplete and degraded measurement data.
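As one toy instance of the optimization-based estimation described above, the sketch below recovers missing entries of an HS cube by gradient descent on a least-squares data-fidelity term plus a quadratic spectral-smoothness regularizer. The cost function, mask model, and all parameter values are illustrative assumptions for this sketch; the surveyed methods use more sophisticated priors (e.g., total variation or low-rank models) and solvers.

```python
import numpy as np

def recover_hs(y, mask, lam=0.1, step=0.2, iters=300):
    """Toy HS recovery sketch (not a method from the paper).

    Estimates a full cube x from masked measurements y = mask * x_true by
    gradient descent on  0.5*||mask*(x - y)||^2 + 0.5*lam*||D x||^2,
    where D takes finite differences along the spectral (last) axis.
    """
    x = y.copy()
    for _ in range(iters):
        # Data-fidelity gradient: only measured entries pull toward y.
        grad = mask * (x - y)
        # Gradient of the spectral-smoothness term: D^T D x.
        d = np.diff(x, axis=-1)
        reg = np.zeros_like(x)
        reg[..., :-1] -= d
        reg[..., 1:] += d
        x -= step * (grad + lam * reg)
    return x
```

Missing bands of each pixel are interpolated from that pixel's measured bands through the smoothness term, while measured entries stay anchored to the data; this anchor-plus-prior structure is the common pattern behind the optimization-based techniques the paper reviews.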