As the Internet of Things (IoT) has become widespread, embedded devices face the risk of malicious attacks. Therefore, lightweight block ciphers, which can be implemented on embedded devices in a small area, have attracted attention as a countermeasure. Simeck is a new lightweight block cipher that can be implemented in the smallest area among lightweight block ciphers. Recently, the risk of electromagnetic analysis attacks on cryptographic circuits has been reported; however, no electromagnetic analysis attack on Simeck has been studied. This study therefore proposes a new electromagnetic analysis attack on the lightweight block cipher Simeck. The proposed method performs the analysis over double rounds. Moreover, it performs built-in processing using already-known information and achieves high attack accuracy. To our knowledge, this is the first such attack on Simeck. Experiments using an actual device demonstrate the vulnerability of Simeck and the validity of the proposed method.
Smart cards, such as credit cards and cash cards, protect confidential information using cryptographic circuits. Because these circuits guard confidential information, they are a frequent target of attacks. One reported class of attacks on cryptographic circuits is side-channel attacks, which reveal cipher keys by intentionally injecting faults into a cryptographic circuit or by measuring its power consumption during operation. A method using a back-check system was reported as a typical countermeasure for the Advanced Encryption Standard (AES) against fault analysis attacks, which exploit a faulty cryptogram together with a correct one. This study proposes a new power analysis method against the countermeasure with the back-check system. The proposed method exploits the circuit structure of the back-check system for the analysis. To our knowledge, this is the first power analysis attack against a countermeasure for fault analysis attacks. Experiments using an LSI prove the validity of the proposed method.
This paper proposes a novel TCP congestion control based on Quality of Experience for Web services (WebQoE). The proposed method controls the congestion window so that the fluctuation of QoS remains low. In this method, the variance of RTT is estimated using the fluctuation of QoS. The authors implemented the method and evaluated its QoS. The experimental results show that the proposed method suppresses the fluctuation of RTT better than TCP CUBIC and TCP NewReno, confirming the effectiveness of the method.
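As a hedged sketch of this idea (the update rule and coefficients are our assumptions, not the paper's algorithm), a sender could track a smoothed RTT deviation and grow its congestion window more cautiously when RTT fluctuates:

```python
# Sketch: variance-aware congestion window growth (assumed rule, not the
# paper's). Grow slowly when RTT fluctuates, faster when it is stable.

class VarianceAwareCwnd:
    def __init__(self, cwnd=10.0, alpha=0.125):
        self.cwnd = cwnd
        self.alpha = alpha      # EWMA gain, as in classic RTT estimators
        self.srtt = None        # smoothed RTT
        self.rttvar = 0.0       # smoothed RTT deviation

    def on_ack(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.alpha) * self.rttvar + \
                          self.alpha * abs(rtt - self.srtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        # Per-ACK additive increase, damped by relative RTT deviation.
        self.cwnd += 1.0 / (self.cwnd * (1.0 + self.rttvar / self.srtt))
```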
Delay Tolerant Networking (DTN) systems provide communication services under challenging network conditions where the communications infrastructure has become unusable. In recent years, DTN systems in which mobile phone users forward messages have attracted attention. In such systems, messages are forwarded by combining the users' physical movement with the phones' wireless network technologies. However, users' social behavior can deny required communication and degrade DTN performance. Here, social behavior refers to behavior in which a user declines required communication with other users in order to save storage/power resources or to avoid communication risks. In this study, we model a DTN system that accounts for users' social behavior, and evaluate the message delivery delay and message delivery rate of the system.
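As a toy illustration of such an evaluation (a direct-delivery Monte Carlo model with made-up parameters, far simpler than the paper's model), one can measure how a denial probability degrades delivery rate and delay:

```python
import random

# Toy model (our assumptions): at each time step a carrier meets the
# destination with probability p_contact, and the contact is refused with
# probability p_deny; messages expire after ttl steps.

def simulate(p_contact=0.05, p_deny=0.3, ttl=500, trials=2000):
    delays = []
    for _ in range(trials):
        for t in range(1, ttl + 1):
            if random.random() < p_contact and random.random() >= p_deny:
                delays.append(t)   # message handed to the destination
                break
    rate = len(delays) / trials
    mean_delay = sum(delays) / len(delays) if delays else float("inf")
    return rate, mean_delay

print(simulate())
```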
If there exists a stable feedback controller that can stabilize the feedback control system of a plant, the plant is said to be strongly stabilizable, and this stable feedback controller is called a strongly stabilizing controller. In this paper, a synthesis methodology for the strongly stabilizing controller based on the minimal-order dual observer is studied. The basic idea of our approach is to formulate the condition that the controller is itself stable while simultaneously stabilizing the feedback control system using linear matrix inequalities, taking advantage of the dual observer's ability to freely assign the closed-loop poles resulting from output injection. Experimental results confirm the effectiveness of the synthesis methodology.
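Illustratively (our notation, not the paper's exact formulation), strong stabilization asks that the closed-loop state matrix $A_{\mathrm{cl}}$ and the controller's own state matrix $A_K$ be Hurwitz simultaneously, which in Lyapunov form reads

$$\exists\, P_1 \succ 0,\ P_2 \succ 0:\qquad A_{\mathrm{cl}}^{\top} P_1 + P_1 A_{\mathrm{cl}} \prec 0, \qquad A_K^{\top} P_2 + P_2 A_K \prec 0 .$$

The paper's dual-observer parameterization is what makes such simultaneous conditions expressible as linear matrix inequalities in the design variables.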
In this study, we evaluated undersea wireless power transfer (WPT). The characteristics of undersea WPT were clarified experimentally, using a vector network analyzer to obtain the scattering parameters (S-parameters). A transmission efficiency of over 10% is obtained at a distance of 300 mm, four times the diameter of the transmitting and receiving coils, when the coils are placed undersea in individual waterproof sealed cases.
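For reference, with a matched two-port measured on a VNA the power transmission efficiency is commonly estimated as |S21|²; a minimal sketch (the S21 value below is a placeholder, not a measurement from the paper):

```python
import cmath

# Efficiency from the measured forward transmission coefficient S21,
# assuming matched ports: eta = |S21|^2.

def efficiency_from_s21(s21):
    return abs(s21) ** 2

s21 = cmath.rect(0.35, cmath.pi / 6)      # placeholder: |S21| = 0.35
print(f"{efficiency_from_s21(s21):.1%}")  # ~12.2%, i.e. over 10%
```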
This paper proposes a new regularization term of non-negative matrix factorization (NMF) for estimating the pitch and onset time of electric bass notes. Both are indispensable information for automatic music transcription. However, they are difficult to estimate from an electric bass performance because of the trade-off between time resolution and frequency resolution: when NMF is applied directly to the performance, the correct note and a note a semitone away are sometimes detected simultaneously. The proposed regularization term avoids such misdetection based on a performance characteristic of the electric bass, namely that two notes are generally not played simultaneously. We define the regularization term so as to keep the adjacent activations decomposed by NMF orthogonal. Experiments using both MIDI-synthesized and real electric bass recordings confirm that the F-measure of the proposed method is higher than that of the comparative method.
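A plausible form of such a term (our notation; the exact expression is an assumption) penalizes the overlap between the activation vectors of semitone-adjacent pitches in the factorization $V \approx WH$:

$$\min_{W,\,H \ge 0}\; \lVert V - WH \rVert_F^2 \;+\; \lambda \sum_{k} \mathbf{h}_k^{\top} \mathbf{h}_{k+1},$$

where $\mathbf{h}_k$ is the activation (row) vector of the $k$-th semitone pitch. Driving $\mathbf{h}_k^{\top} \mathbf{h}_{k+1}$ toward zero keeps a note and its neighboring semitone from being active at the same time.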
The authors have conducted studies on Arabic telop recognition to develop a system for keyword-based video retrieval, so as to index and edit Arabic broadcast programs received daily and stored in a large database. This paper describes a dedicated OCR for recognizing low-resolution telops in video images. A telop recognition system consisting of text-line extraction, word segmentation, and segmentation-recognition of words was developed, and its performance was experimentally evaluated using datasets of frame images extracted from AlJazeera broadcast programs. Character recognition of moving telops is difficult due to combing noise caused by the interlacing of scan lines; a technique to detect and eliminate the combing noise so as to correctly recognize moving telops is proposed. This paper also proposes a technique, based on an insertion operation with minimum edit distance between two successive telops, to connect them. Connecting the moving telops is necessary for automatic language translation. The proposed method using edit distance over bi-gram sequences of telops (Method-B) is shown to be robust to character recognition errors and to successfully connect the telops.
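A simplified character-level sketch of the connection step (the paper's Method-B works on bi-gram sequences of telops; the overlap window and threshold here are our assumptions): if the tail of one recognition result is close in edit distance to the head of the next, treat them as one moving telop and merge.

```python
# Minimum edit distance (Levenshtein) by dynamic programming, then a
# crude merge rule for two successive telop recognition results.

def edit_distance(a, b):
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def connect(t1, t2, overlap=8, max_dist=2):
    if edit_distance(t1[-overlap:], t2[:overlap]) <= max_dist:
        return t1 + t2[overlap:]   # merge, dropping the duplicated overlap
    return None                    # not the same moving telop
```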
Active learning has attracted attention as a study method in which a student discovers a problem and actively finds the solution. Cooperative learning, in which students work cooperatively in small groups, is one method of active learning. Working together in a small group to accomplish shared goals is an effective approach to mutual learning and the development of good friendships. In recent years, group composition methods for cooperative learning have drawn the attention of researchers. Most grouping methods need a preliminary survey to gather information about students, but when many students attend a lecture at the same time, as at a university, it is difficult for teachers to know each student's character and ability in advance. In this paper, we propose new grouping strategies for cooperative learning that require no special surveys. Our proposed methods use the students' friendship network as prior information; this network can easily be obtained by estimating friendships from lecture attendance data. We apply our proposed methods to actual cooperative learning activities and observe their effects. Questionnaire results show that satisfaction with the lecture was higher among students grouped by the proposed methods than among students grouped by the conventional method. An analysis of the resulting friendship networks shows that students grouped by the proposed methods made more friends than students grouped by the conventional method.
This paper describes a space segmentation method (SSM)-based algorithm for enumerating all intersections of given line segments on a bounded plane, a classical problem in computational geometry. SSM is an algorithm for finding all combinations of input data that generate points satisfying some condition within a particular space. Assuming that the minimum distance between these points is bounded, the algorithm finds the combinations by recursively segmenting the space. Defining a hierarchical mesh system, we designed an efficient and simple algorithm for the line segment intersection enumeration problem. Although we have been unable to estimate the computational complexity of the algorithm, a program implementing it performed extremely well on random inputs, and was also faster than an implementation of the well-known algorithm by Bentley and Ottmann (4). Moreover, as our algorithm is quite simple, it can easily be rewritten as a parallel program; in a twelve-thread environment, such a program ran approximately 2.6 times faster than the single-threaded version.
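A minimal sketch of the recursive-segmentation idea (our illustration, not the paper's hierarchical mesh system): subdivide the bounding box until each cell holds only a few segments, then test pairs only within leaf cells.

```python
# Segments are ((x1, y1), (x2, y2)) tuples; cells are (x0, y0, x1, y1).

def bbox_overlaps(seg, cell):
    (x1, y1), (x2, y2) = seg
    cx0, cy0, cx1, cy1 = cell
    return max(min(x1, x2), cx0) <= min(max(x1, x2), cx1) and \
           max(min(y1, y2), cy0) <= min(max(y1, y2), cy1)

def candidates(segs, cell, depth=0, limit=4, max_depth=12, out=None):
    if out is None:
        out = set()
    inside = [s for s in segs if bbox_overlaps(s, cell)]
    if len(inside) <= limit or depth == max_depth:
        # Leaf cell: every pair here is a candidate intersection.
        for i in range(len(inside)):
            for j in range(i + 1, len(inside)):
                out.add((inside[i], inside[j]))
        return out
    x0, y0, x1, y1 = cell
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    for sub in ((x0, y0, mx, my), (mx, y0, x1, my),
                (x0, my, mx, y1), (mx, my, x1, y1)):
        candidates(inside, sub, depth + 1, limit, max_depth, out)
    return out
```

An exact segment-segment intersection test would then be applied to each candidate pair; the set deduplicates pairs reported by several cells.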
This paper presents a new denoising algorithm based on the outer product expansion with lower norms for the background noise. We previously proposed novel source separation methods using the outer product expansion with L1-norm minimization, and demonstrated the effectiveness of outer product expansions for artificial signals and electromagnetic wave data. However, the denoising performance degrades as local signals increase. In this paper, we propose the outer product expansion with lower norms (0.1 to 0.9). Simulation results show that the proposed method achieves accurate background noise reduction.
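One plausible objective (our notation; the exact cost is an assumption) replaces the L1 cost of the rank-one outer-product fit with an Lp cost for $0.1 \le p \le 0.9$:

$$\min_{\mathbf{a},\,\mathbf{b}} \ \lVert X - \mathbf{a}\mathbf{b}^{\top} \rVert_p^p \;=\; \min_{\mathbf{a},\,\mathbf{b}} \sum_{i,j} \bigl| x_{ij} - a_i b_j \bigr|^p ,$$

where smaller $p$ down-weights large local signals so the rank-one term tracks the background noise rather than the outliers.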
The 3-SAT problem concerns Boolean expressions in three-conjunctive normal form (3-CNF), in which every clause consists of exactly three literals while the numbers of variables and clauses are unrestricted. The 3-SAT problem is known to be NP-hard and very difficult to solve: solving 3-SAT means finding an assignment of true or false to each variable such that the 3-SAT expression evaluates to true. This study implemented an island-model genetic algorithm (Island-based GA) to solve the 3-SAT problem; the solution involves the novel use of the Island-based GA to improve the performance of solving 3-SAT. Four suites of benchmark SAT problems (URSAT1, URSAT2, URSAT3, and URSAT4) from SATLIB were used to test the performance of the Island-based GA, which was compared with MAEA-SAT and a standard GA (SGA). The Island-based GA obtained good results and performance in solving large-scale SAT problems.
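A minimal island-model GA sketch for 3-SAT (our illustration; population sizes, migration interval, and operators are assumptions, not the paper's settings): each island evolves independently and periodically passes its best individual to the next island in a ring.

```python
import random

# A clause is a tuple of nonzero ints: +v means variable v, -v its negation.

def fitness(assign, clauses):
    # Number of satisfied clauses.
    return sum(any((lit > 0) == assign[abs(lit) - 1] for lit in cl)
               for cl in clauses)

def evolve(pop, clauses, n_vars, mut=0.02):
    scored = sorted(pop, key=lambda a: fitness(a, clauses), reverse=True)
    elite = scored[: len(pop) // 2]
    children = []
    while len(elite) + len(children) < len(pop):
        p1, p2 = random.sample(elite, 2)
        cut = random.randrange(n_vars)              # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [not b if random.random() < mut else b for b in child]
        children.append(child)
    return elite + children

def island_ga(clauses, n_vars, n_islands=4, pop_size=30,
              gens=200, migrate_every=20):
    islands = [[[random.random() < 0.5 for _ in range(n_vars)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(gens):
        islands = [evolve(p, clauses, n_vars) for p in islands]
        if g % migrate_every == 0:                  # ring migration
            for i, pop in enumerate(islands):
                best = max(pop, key=lambda a: fitness(a, clauses))
                islands[(i + 1) % n_islands][-1] = list(best)
    return max((max(p, key=lambda a: fitness(a, clauses)) for p in islands),
               key=lambda a: fitness(a, clauses))
```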
Smart homes have been vigorously explored for nearly two decades to introduce the concept of networking devices and appliances in the house. In this paper, we present the concept and standardization of smart homes from a ubiquitous-systems perspective. We introduce an implementation of a smart home, iHouse, which uses the ECHONET Lite standard specification. The paper also gives an overall discussion of a cloud-based smart home platform, its service categories, and research challenges.
In Evolutionary Algorithms (EAs), the selection scheme is a pivotal component that relies on the fitness values of individuals to apply the Darwinian principle of survival of the fittest. In Particle Swarm Optimization (PSO), the idea of a selection scheme appears in only one place, the global-best operator, in which the components of the best solution are selected to steer the search and used in generating subsequent solutions. However, this selection process may harm the diversity of PSO, since the search converges toward the best solution rather than exploring the whole search space. In this paper, new selection schemes that replace the global-best selection are investigated: fitness-proportional, tournament, linear rank, and exponential rank. Each proposed selection scheme is individually incorporated into the PSO process, and each adoption is realized as a new PSO variation. The performance of the proposed PSO variations is evaluated; experimental results on benchmark functions show that the selection scheme directly affects the performance of the PSO algorithm. Finally, a parameter sensitivity analysis of the new PSO variations is presented.
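A hedged sketch of one of the investigated variations: the global-best term of the velocity update is replaced by a tournament-selected leader (the tournament size and coefficients are our assumptions, not the paper's settings).

```python
import random

def tournament_leader(swarm, fitnesses, k=3):
    # Pick k random particles; the fittest of them leads this update.
    contenders = random.sample(range(len(swarm)), k)
    return swarm[min(contenders, key=lambda i: fitnesses[i])]  # minimization

def update_particle(x, v, pbest, leader, w=0.7, c1=1.5, c2=1.5):
    # Standard PSO update, with `leader` in place of the global best.
    r1, r2 = random.random(), random.random()
    v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (ld - xi)
         for vi, xi, pb, ld in zip(v, x, pbest, leader)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v
```

Because a different leader can win each tournament, particles are no longer all pulled toward a single point, which is the diversity argument the paper examines.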
The goal of this study is to develop an exploratory search system for sound effects (SEs). SEs appreciably influence viewers' impressions of a scene in a movie; thus, SE editors must skillfully select the most appropriate SEs for each scene. Existing SE search methods, however, face three main difficulties: the diversity of SE purposes, the representation of sound in text, and the conceptualization of SEs for a given scene. These difficulties lead to inefficient SE searches because the SE editor is forced to perform repeated searches. To solve this problem, this paper proposes a framework for SE exploration from multiple perspectives. In the framework, similarities among SEs are provided to the searcher as clues for exploration. The similarities are defined by three types of features: context, acoustic features, and onomatopoeic symbols. This paper presents the details of the framework, a system developed with the framework, the functions and interactions provided by the system, and the results of user observation with the system.
In modern society, innovation, which derives from potential demand and knowledge creation, is one requirement for business success. Potential demand might be found through detailed market research or Big Data, because the streets are full of information. Knowledge creation, however, does not occur incidentally; it arises from a fusion of knowledge and skill. A methodical system of interpretation for knowledge creation is the i-system, and many applications of knowledge creation are explained by i-systems. However, an i-system alone cannot achieve knowledge creation. The authors have developed the Business and Accounting School for Entrepreneurs business games (BASE business games), a participation-type education technique using analogue games. As described herein, the authors attempt to interpret teaching methods with BASE business games using an i-system. The authors used the “Supply Chain Collaboration Game (SCC game)” and the “Supply Chain Collaboration 2 Game (SCC2 game)”, both BASE business games, at the School of Management Technology (MT) of the Sirindhorn International Institute of Technology (SIIT), Thammasat University, in 2014 and 2015. To clarify important factors underlying knowledge creation, the authors compared the students' questionnaire answers from the knowledge creation perspective. Results show that this teaching method unites the students' knowledge into meaningful experiences and facilitates understanding of knowledge creation as an experience. Furthermore, the results confirmed that communication between students is an important factor underlying knowledge creation.
Through the lens of traditional education, teachers teach students and students learn from teachers; the relationship between teachers and students is hierarchical, in goods-dominant logic. According to service-dominant logic, however, all economies are service economies, and education is no exception. Through the lens of service logic, professors as teachers in higher education and their students are both givers and receivers, and they co-create value with each other. Accordingly, the role of students is dominant in the co-creation relationship. In the co-creation process, goals are significant factors because they are related to the motivation for studying. In this work, we analyze the co-creation between professors and adult students in an Innovation Management of Service and Technology (iMOST) course at the Japan Advanced Institute of Science and Technology (JAIST), based on the goals to co-create and the value co-creation on both sides. By identifying the important goals and the relationship between those goals and value co-creation, graduate institutes, schools, and faculties could enhance value co-creation with students.
This research proposes a new methodology for maximizing service values in the business co-creation process. We previously proposed the KIKI model as a methodology for the business co-creation process; however, the KIKI model does not numerically consider the service value created by the co-creation process. To improve the KIKI model, in this paper the service value is calculated as the inner product of a user's service attribute vector and a provider's service attribute vector, and we apply this methodology to the KIKI model. The proposed methodology can advance the co-creation process while maximizing service value. To show its effectiveness, we analyze production equipment services for saving or generating energy. The service value co-creation process can be described clearly and executed effectively.
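In symbols (our notation), with the user's service attribute vector $\mathbf{u}$ and the provider's service attribute vector $\mathbf{p}$, the service value of the paper is

$$V \;=\; \langle \mathbf{u}, \mathbf{p} \rangle \;=\; \sum_{i=1}^{n} u_i\, p_i ,$$

and the co-creation process is advanced in the direction that maximizes $V$.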
This paper is a preliminary study of Open Source Software (OSS) implementation in the Malaysian Public Sector. The objective of the study is to explore the state of OSS implementation among government agencies since the launch of the Malaysian Public Sector OSS Master Plan by the Government of Malaysia on July 16, 2004. Semi-structured face-to-face interviews using open-ended questions were conducted in April 2015 with ICT managers/officers at six selected government agencies in the Northern Region of Peninsular Malaysia. The study investigates the usage of OSS and proprietary software, the level of OSS utilisation, the level of OSS knowledge and training of ICT and non-ICT staff, the software development and acquisition model, internal OSS manpower capabilities and skills, users' perceptions of the advantages of OSS, users' perceptions of the risks of OSS, and the problems or barriers in OSS implementation. The interview results show that there are many problems or barriers in OSS implementation, e.g. a lack of internal OSS expertise, a lack of OSS policy, and a lack of top-management support.
For a system providing both a server SSD (Solid State Drive) cache and volume tiering functions, it is difficult to determine which data should be placed in the SSD tier on the basis of the number of I/Os (Inputs/Outputs). In this paper, we propose a volume tiering method that cooperates with the server SSD cache: it counts the number of I/Os on the server and allocates storage SSD to read-intensive areas. We conducted an I/O simulation experiment using I/O logs traced in a real environment to validate the proposed method, and show that it reduces the average I/O response time by up to 10% compared with the existing method.
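A hedged sketch of the server-side decision (area size, names, and the promotion rule are our assumptions): count read I/Os per fixed-size area and promote the most read-intensive areas to the storage SSD tier.

```python
from collections import Counter

EXTENT = 1 << 20  # 1 MiB areas (assumed granularity)

def read_counts(io_log):
    # io_log: iterable of (op, byte_offset); only reads drive promotion.
    counts = Counter()
    for op, offset in io_log:
        if op == "R":
            counts[offset // EXTENT] += 1
    return counts

def pick_ssd_extents(io_log, ssd_capacity_extents):
    # Promote the areas with the most server-observed reads.
    counts = read_counts(io_log)
    return [e for e, _ in counts.most_common(ssd_capacity_extents)]
```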
The ability to correctly understand the context of a critical issue and deliver the exact solution expected by an agency's stakeholders has been a long-standing problem. This research is motivated to help stakeholders better understand a critical issue by looking into its contextualisation aspects. The proposed contextualisation approach comprises three processes: context characterisation, context representation, and context interpretation. At the end of a contextualisation process cycle, a set of consistent triad relationships is derived to represent the current context of a critical issue. In an experiment conducted with a stakeholder, the proposed approach proved able to help stakeholders determine the right context for understanding a critical issue. A case study was also conducted to assess the ease of use and usefulness of the proposed approach and the stakeholders' intention to use it. It is believed that once the proposed approach is run over many contextualisation process cycles, with the help of machine learning systems and advanced analytics tools, it could produce a useful set of contexts for any critical issue, supporting better decision deliberations and insights.
In this paper, we propose a method for detecting cooking motion from video and 3D information. First, we extract feature areas that are important for detecting cooking motion from video, such as cooking materials and the cook's arms. In the second step, the cooking operation area is detected, and finally we identify cooking motions such as 'mixing', 'heating', and 'cutting'. The experimental results show the capability of our proposed method.
We fabricated a p-ZnTe/n-ZnO heterojunction structure by a direct bonding technology. The surfaces of the p-ZnTe and n-ZnO substrates were activated by low-energy argon-ion bombardment under high vacuum while preserving their surface roughness. Subsequently, the substrates were brought into contact under controlled pressure at room temperature and annealed in an argon atmosphere as a post-process. Atomic-scale bonding was confirmed by transmission electron microscopy and scanning energy-dispersive X-ray spectroscopy. It was also confirmed that the electric rectifying characteristics depend on the bonding press time and the post-anneal temperature.
A ΔΣ DA modulator suffers from a limit-cycle problem when its input amplitude is very small, and here we propose a digital dither method to solve this problem. It uses an XOR gate at the modulator output: one of its inputs is the comparator output (the digital integrator MSB) and the other input is a digital dither generated by another digital ΔΣ modulator. Our simulation results with MATLAB and RTL verify the effectiveness of the proposed method.
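A hedged sketch of the dither path (our illustration; the paper's generator details are not reproduced): the comparator output bit is XORed with the carry-bit stream of a small first-order digital ΔΣ modulator running on a constant input, which breaks up limit-cycle tones at small inputs.

```python
def dither_bits(seed=0.371, n=16):
    # First-order digital delta-sigma on a constant input: the accumulator
    # carry-out stream serves as the dither sequence.
    acc, bits = 0.0, []
    for _ in range(n):
        acc += seed
        bit = acc >= 1.0
        if bit:
            acc -= 1.0
        bits.append(int(bit))
    return bits

def apply_dither(msb_stream, dither):
    # XOR the comparator output (MSB) with the dither stream.
    return [m ^ d for m, d in zip(msb_stream, dither)]
```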