As the number of electronic control units (ECUs) or sensors connected to a controller area network (CAN) bus increases, so does the bus load. When a CAN bus is overloaded by a large number of ECUs, both the waiting time and the error probability of data transmission increase. Because the duration of a data transmission is proportional to the frame length, it is desirable to reduce the CAN frame length. In this paper, we present an improved CAN data-reduction (DR) algorithm that reduces the amount of data to be transferred, and hence the CAN frame length. We also implement the data-reduction algorithm using the CANoe software and measure the CAN bus load with a CANcaseXL device. Experimental results with a Kia Sorento vehicle indicate that the proposed method obtains an additional average compression ratio of 11.15% compared with the ECANDC algorithm. Using the CANoe software, we show that the average message delay is within 0.10ms and that, with 20 ECUs, the proposed method reduces the bus load by 23.45% compared with uncompressed messages.
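The abstract does not detail the DR algorithm itself. As a rough illustration of the general idea behind CAN data-reduction schemes (a hypothetical sketch, not the proposed or ECANDC algorithm), the snippet below transmits only the bytes of an 8-byte CAN payload that changed since the previous frame, prefixed by a one-byte change bitmask:

```python
def compress(prev, cur):
    """Encode an 8-byte CAN payload as (change bitmask, changed bytes)."""
    mask = 0
    changed = []
    for i, (p, c) in enumerate(zip(prev, cur)):
        if p != c:
            mask |= 1 << i          # mark byte i as changed
            changed.append(c)
    return bytes([mask]) + bytes(changed)

def decompress(prev, frame):
    """Rebuild the current payload from the previous one plus the delta frame."""
    mask, payload = frame[0], frame[1:]
    out = list(prev)
    j = 0
    for i in range(8):
        if mask & (1 << i):
            out[i] = payload[j]
            j += 1
    return bytes(out)

prev = bytes([0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80])
cur  = bytes([0x10, 0x21, 0x30, 0x40, 0x50, 0x60, 0x70, 0x81])
frame = compress(prev, cur)  # 3 bytes on the bus instead of 8
```

When consecutive frames differ in only a few signals, as is common for periodic sensor messages, the transmitted data field shrinks accordingly, which is the effect the compression-ratio figures above quantify.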
This paper studies H∞ control for networked control systems with packet loss. Packet loss is one of the major weaknesses of networked control systems because it degrades control performance. H∞ control, a robust control method, can be used to design a controller that reduces the influence of disturbances acting on the controlled object. This paper proposes an H∞ control design that treats packet loss as a disturbance. Numerical examples show that the proposed design reduces the performance deterioration caused by packet loss more effectively than the conventional H∞ control design. In addition, this paper compares the control performance of H∞ control and linear quadratic (LQ) control; numerical examples show that the proposed H∞ design outperforms the LQ design.
We present a novel receiver for reliable IoT communications. In this letter, IoT communications are assumed to be based on ZigBee in frequency-selective indoor environments. ZigBee builds on the IEEE 802.15.4 specification for low-power, low-cost communications, and the presented receiver fully follows that specification. However, a specification-compliant receiver exhibits extremely low performance in frequency-selective environments. Therefore, a channel estimation approach is proposed for reliable communication over frequency-selective indoor fading channels. The estimation method relies on FFT operations, which are usually already embedded in cellular phones. We also suggest a correlation method for accurate recovery of the original information. Simulation results show that the proposed receiver is well suited to IoT communications in frequency-selective indoor environments.
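The letter does not spell out the FFT-based estimator. A textbook frequency-domain channel estimate with a known training sequence (a generic sketch under the circular-convolution model, not necessarily the letter's method) divides the DFT of the received signal by the DFT of the transmitted one:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def cconv(x, h):
    """Circular convolution of equal-length sequences (the channel model)."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

# Training sequence chosen so its DFT has no zero bins, and an unknown 2-tap channel
x = [2, 1, 1, 1, 1, 1, 1, 1]
h_true = [0.9, 0.4] + [0.0] * 6
y = cconv(x, h_true)                 # received signal (noise-free)

# Frequency-domain estimate: H[k] = Y[k] / X[k]
H = [yk / xk for yk, xk in zip(dft(y), dft(x))]
h_est = idft(H)                      # recovered channel impulse response
```

With noise, the same per-bin division gives the least-squares estimate, which is what makes an FFT engine sufficient for equalizing a frequency-selective channel.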
Special Section on Mathematical Systems Science and its Applications
Similarity search is a fundamental problem that is widely used in various fields of computer science, including multimedia, computer vision, databases, and information retrieval. Recently, since loitering behavior often leads to abnormal situations, such as pickpocketing and terrorist attacks, its analysis has attracted increasing attention from research communities. In this paper, we present AntiLoiter, a loitering discovery system that adopts efficient similarity search on surveillance videos. Most existing systems for loitering analysis focus mainly on detecting or identifying loiterers by behavior-tracking techniques. However, tracking-based methods are heavily influenced by occlusions, overlaps, and shadows, and they must track human appearances continuously. Existing methods are therefore not readily applicable to real-world surveillance cameras, where the appearances of criminal loiterers are discontinuous. To solve this problem, we abandon tracking and instead propose AntiLoiter, which efficiently discovers loiterers based on their frequent appearance patterns in long-duration, multi-camera surveillance videos. In AntiLoiter, we propose a novel data structure, Luigi, that indexes data using only the similarity values returned by a corresponding function (e.g., face matching). Luigi performs efficient similarity search to realize loitering discovery. We conducted extensive experiments on both synthetic and real surveillance videos to evaluate the efficiency and efficacy of our approach. The experimental results show that our system finds loitering candidates correctly and outperforms the existing method by a factor of 100 in runtime.
The paper studies the controllability of an aggregate demand response system, i.e., the change of total electric consumption in response to a change of the electricity price, for real-time pricing (RTP). To quantify controllability, the paper defines the controllability index as the lowest occurrence probability of the total electric consumption when the best possible electricity price is chosen, and then formulates the problem of finding the consumer group that maximizes this index. The controllability problem becomes hard to solve as the number of consumers increases. To obtain a solution, the paper approximates the controllability index using the generalized central limit theorem. With the approximated index, the controllability problem reduces to solving a set of nonlinear equations. Since the number of variables in these equations is independent of the number of consumers, an approximate solution of the controllability problem is obtained by solving the equations numerically.
In this paper, the authors propose an integer linear programming (ILP) model for static multi-car elevator operation problems. Here, “static” means that all information that would make the behavior of the elevator system nondeterministic is known before scheduling. The proposed model is based on the trip-based ILP model for static single-car elevator operation problems. A trip of an elevator is a one-directional movement of that elevator, labeled upward or downward. In the trip-based ILP model, an elevator trajectory is scheduled according to decision variables that allocate trips to users of the elevator system. An advantage of that model is that the difficulty of solving the resulting ILP formulations depends on neither the length of the planning horizon nor the height of the building, so it is effective when elevator trajectories are simple. Moreover, that model has many variables describing elevator positions. The proposed model results from adding three constraints, based on those variables, that prevent elevators in the same shaft from interfering. The first constraint simply requires the first and last floors of an upper trip to be above those of its lower trip. The second constraint considers the crossing point between the upper and lower trips and requires it to lie ahead of or behind the lower trip according to their directions. The last constraint estimates future positions of the elevators and requires the upper trip to stay above the floors of passengers on the lower trip. The basic validity of the proposed model is demonstrated by solving 90 problem instances, examining the generated elevator trajectories, and comparing the objective function values of trajectories on a multi-car elevator system with those on single-car elevator systems.
We consider a decentralized similarity control problem for composite nondeterministic discrete event systems, where each subsystem has its own local specification and the entire specification is described as the synchronous composition of local specifications. We present necessary and sufficient conditions for the existence of a complete decentralized supervisor that solves a similarity control problem under the assumption that any locally uncontrollable event is not shared by other subsystems. We also show that the system controlled by the complete decentralized supervisor that consists of maximally permissive local supervisors is bisimilar to the one controlled by the maximally permissive monolithic supervisor under the same assumption.
In this paper, a new method of model predictive control (MPC) for a multi-hop control network (MHCN) is proposed. An MHCN is a control system in which plants and controllers are connected through a multi-hop wireless network. In the proposed method, (i) control inputs and (ii) the paths used to transmit them are computed at a constant period by solving a finite-time optimal control problem. First, a mathematical model of an MHCN is proposed. This model is given by a switched linear system and is compatible with MPC. Next, the finite-time optimal control problem using this model is formulated and reduced to a mixed integer quadratic programming problem. Finally, a numerical example is presented to show the effectiveness of the proposed method.
Event-triggered control is a control method in which the measured signal is sent to the controller only when a certain triggering condition on the measured signal is satisfied. In this paper, we propose a linear quadratic regulator (LQR) with decentralized triggering conditions. First, a suboptimal solution to the design problem of LQRs with decentralized triggering conditions is derived. A state-feedback gain can be obtained by solving a convex optimization problem with LMI (linear matrix inequality) constraints. Next, the relation between centralized and decentralized triggering conditions is discussed. It is shown that the control performance of an LQR with decentralized event-triggering is better than that with centralized event-triggering. Finally, a numerical example is presented.
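The triggering mechanism described above can be sketched with a scalar example (an illustrative relative-threshold trigger, not the paper's LMI-based design; the gains and threshold below are arbitrary): the state is retransmitted to the controller only when it has drifted sufficiently far from the last transmitted value.

```python
def simulate(a=0.95, b=1.0, K=0.1, sigma=0.3, x0=10.0, steps=50):
    """Scalar event-triggered state feedback x+ = a*x + b*u, u = -K*x_sent."""
    x, x_sent = x0, x0
    transmissions = 0
    for _ in range(steps):
        # trigger: send the state only when it deviates from the
        # last transmitted value by more than sigma * |x|
        if abs(x - x_sent) > sigma * abs(x):
            x_sent = x
            transmissions += 1
        u = -K * x_sent          # controller acts on the last sent state
        x = a * x + b * u
    return x, transmissions

x_final, n_tx = simulate()       # state decays with far fewer than 50 sends
```

The point of the comparison in the paper is quantifying how much performance such communication savings cost; here one can verify that the state still converges while many sampling instants produce no transmission.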
Workflow nets (WF-nets for short) are a standard way to automate business processes. Well-structured WF-nets (WS WF-nets for short) are an important subclass of WF-nets because they strike a good balance between expressive power and analytical power. In this paper, we reveal structural and behavioral properties of WS WF-nets. Our results on structural properties are: (i) no WS WF-net is EFC but non-FC; (ii) a WS WF-net is sound iff it is a van Hee et al.'s ST-net. Our results on behavioral properties are: (i) any WS WF-net is safe; (ii) any WS WF-net is separable; (iii) a necessary and sufficient condition for reachability in a sound WS WF-net (N,[pIk]) is given. Finally, we illustrate the usefulness of these properties with an application example of analyzing workflow evolution.
This letter presents a method for solving several linear equations in max-plus algebra. The essential part of these equations is reduced to constraint satisfaction problems compatible with mixed integer programming. This method is flexible, compared with optimization methods, and suitable for scheduling of certain discrete event systems.
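The letter's MIP reduction is not reproduced here, but the max-plus objects it manipulates are easy to state concretely. The sketch below (standard max-plus background, not the letter's method) implements the max-plus matrix-vector product and the classical principal solution of A ⊗ x = b obtained by residuation, x*_j = min_i (b_i − a_ij), which is the greatest x with A ⊗ x ≤ b:

```python
def mp_matvec(A, x):
    """Max-plus product: (A ⊗ x)_i = max_j (a_ij + x_j)."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def principal_solution(A, b):
    """Greatest x satisfying A ⊗ x <= b componentwise, by residuation."""
    n = len(A[0])
    return [min(b[i] - A[i][j] for i in range(len(A))) for j in range(n)]

A = [[2, 5],
     [3, 3]]
b = [7, 6]
x = principal_solution(A, b)   # x = [min(5, 3), min(2, 3)] = [3, 2]
y = mp_matvec(A, x)            # here the bound is attained: y equals b
```

If A ⊗ x* equals b, as in this example, the equation is solvable exactly; otherwise no solution exists, which is the kind of feasibility question the constraint-satisfaction formulation addresses.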
Limited satellite visibility, multipath, and non-line-of-sight signals reduce the performance of a stand-alone Global Navigation Satellite System (GNSS) receiver in urban environments. Embedding a 3D model of urban structures may significantly improve position measurement accuracy when the visibility of GNSS satellites is restricted by urban canyons. State-of-the-art methods use raytracing or rasterization techniques applied to a 3D map to detect satellite visibility, but these techniques are computationally expensive, which limits their widespread use in mobile and automotive applications. In this paper, a texture-based satellite visibility detection (TBSVD) methodology suitable for mobile- and automotive-grade graphics processing units is presented. The methodology applies a ray-marching algorithm to a 2D height-map texture of urban structures and is proposed as a more efficient alternative to 3D raytracing or rasterization. A real road test in the business district of a metropolitan city was conducted to evaluate its performance. TBSVD was implemented in a conventional ranging-based GNSS solution, and the results illustrate the effectiveness of the proposed approach.
This paper focuses on the development of a single portable roadside magnetic sensor for vehicle classification. The sensor is an anisotropic magnetic device that does not need to be embedded in the roadway: it is placed next to the roadway and measures traffic in the immediately adjacent lane. A novel feature extraction and comparison approach is presented for vehicle classification with a single magnetic sensor, based on four different feature sets extracted from the detected magnetic signal. Furthermore, vehicle classification is performed with three common classification algorithms: support vector machine, k-nearest neighbors, and back-propagation neural network. Experimental results demonstrate that the peak-peak feature set with the back-propagation neural network performs much better than the other approaches. In addition, the normalization technique is shown to be effective.
This paper presents a set of procedures that blend GNSS and V2V communication to improve the performance of a stand-alone on-board GNSS receiver and to ensure mutual positioning with a bounded error. A particle filter algorithm is applied to enhance the mutual positioning of vehicles; it fuses information provided by the GNSS receiver, wireless measurements in vehicular environments, the odometer, and digital road map data including reachability and zone probabilities. A measurement-based statistical model of relative distance as a function of time-of-arrival is obtained experimentally. The effect of the number of collaborating vehicles on the mutual positioning procedure is investigated in terms of positioning accuracy and network performance through realistic simulation studies, and the proposed procedure is evaluated experimentally with a fleet of five vehicles equipped with IEEE 802.11p radio modems. Collaboration in a VANET improves the availability of position measurements and their accuracy by up to 40% compared with the stand-alone GNSS receiver.
In this paper, the performance of a vehicle information sharing (VIS) system for an intersection collision warning system (ICWS) is analyzed. The on-board unit (OBU) of the ICWS sharing obstacle detection sensor information (ICWS-ODSI) is mounted on a vehicle and obtains information about surrounding vehicles, such as their position and velocity, from its in-vehicle obstacle detection sensors. This information is shared with other vehicles via an intervehicle communication network. In this analysis, a T-junction is assumed as the road environment for the theoretical analysis of VIS performance in terms of the mean of the entire vehicle information acquiring probability (MEVIAP). The MEVIAP as a function of OBU penetration rate indicates that the ICWS-ODSI is superior to the conventional VIS system, which shares only each vehicle's own driving information via an intervehicle communication network. Furthermore, the MEVIAP as a function of the sensing range of the ICWS-ODSI is analyzed, and it is found that the ISO 15623 sensor used for forward vehicle collision warning systems is a promising candidate for the in-vehicle detection sensor of the ICWS-ODSI.
Establishing drivers' trust in the automated driving system is critical to the success of automated vehicles. The focus of this paper is learning what drivers of automated vehicles need to feel confident during braking events. In this study, 10 participants drove a test vehicle, and each experienced 24 different deceleration settings. Prior to each drive, participants were told the expected brake starting and stopping positions. During each drive, participants maintained a set speed and then stopped the vehicle when they saw a signal to apply the brakes. After each drive, participants were asked about their perceived safety level during the deceleration setting they had just experienced. The results revealed that jerk has a significant influence on drivers' perceived safety. For this study, we have named this jerk movement ‘impression jerk’ (IJ). Using IJ clearly separates the secure and anxious feelings of the drivers while accounting for individual differences.
Driving-safety-related innovations have received increasing interest from the automotive industry. We performed an experiment to observe which situations are related to the feeling of security drivers have while driving, and found that drivers need four to seven seconds to react to a possible collision when they operate on-board Human Machine Interface (HMI) devices and check display devices. We explored semantic-space distances to see which factors of HMI interaction lead to a feeling of security within that time period, and extracted 32 types of such factors. Furthermore, in the process of investigating the semantic-space distances, the indicators related to the feeling of security obtained in prior studies were refined, for this time period, to ‘the layout of the operation device matches the driver's mental image’ and ‘the driver can give instructions using the words he uses every day’, which are more concrete factors of the feeling of security.
We have developed a Pedestrian-Vehicular Collision Avoidance Support System (P-VCASS) to protect pedestrians from traffic accidents, and its effectiveness has been verified. P-VCASS takes pedestrians' movements into account and warns drivers of neighboring vehicles in advance if there is a possibility of a collision between a vehicle and a pedestrian. Some pedestrians move around unpredictably; they are dangerous to drivers because they are likely to run out into the road suddenly, so their presence must be taken into account. In this paper, we propose a new method for estimating whether a pedestrian will run out into the road, using a pressure sensor and movement records. We show the validity of the proposed system by experiments using a vehicle and a pedestrian terminal at an intersection. The results show that a driver can detect dangerous pedestrians quickly and accurately.
Driver behavior assessment is a hard task because it involves distinctive, interconnected factors of different types. Especially in insurance applications, the trade-off between application cost and data accuracy remains a challenge: data uncertainty and noise make smartphone or low-cost sensor platforms unreliable. To deal with these problems, this paper proposes combining belief and fuzzy theories in a two-level fusion architecture. It enables the propagation of information errors from the lower to the higher fusion level using the belief and/or plausibility functions at the decision step. The newly developed risk models of the driver and the environment are based on analyses of accident statistics for each significant driving-risk parameter. The developed vehicle risk models are based on the longitudinal and lateral accelerations (G-G diagram) and the velocity, to qualify driving behavior during critical events (e.g., a zig-zag scenario). In over-speed and/or accident scenarios, the risk is evaluated using our newly developed fuzzy inference system model based on the Equivalent Energy Speed (EES). The proposed approach and risk models are illustrated by two driving scenarios using the CarSim vehicle simulator. The results show the validity of the developed risk models and their coherence with the a-priori risk assessment.
Environment perception is an important task for intelligent vehicle applications. Typically, multiple sensors with different characteristics are employed to perceive the environment, and the information from the different sensors is often integrated or fused to perceive it robustly. In this article, we propose to perform sensor fusion and registration of a LIDAR and a stereo camera using the particle swarm optimization algorithm, without the aid of any external calibration objects. The proposed algorithm automatically calibrates the sensors and registers the LIDAR range image with the stereo depth image. The registered LIDAR range image functions as the disparity map for stereo disparity estimation, resulting in an effective sensor fusion mechanism. Additionally, we denoise the input image with a modified non-local means filter during stereo disparity estimation to improve robustness, especially at night. To evaluate the proposed algorithm, the calibration and registration algorithm is compared with baseline algorithms on multiple datasets acquired under varying illumination. We show that our algorithm demonstrates better accuracy than the baselines, and that integrating the LIDAR range image into stereo disparity estimation yields an improved disparity map with a significant reduction in computational complexity.
This paper presents a method to accelerate target-recognition processing in advanced driver assistance systems (ADAS). The histogram of oriented gradients (HOG) is an effective descriptor for object recognition in computer vision and image processing, and it is expected to replace conventional descriptors, e.g., template matching, in ADAS. However, the HOG misrepresents the gradient orientations on an object when the localized portion of the image, i.e., the region of interest (ROI), is not set precisely. The size and position of the ROI must be set precisely for each frame in an automotive environment, where the target distance changes dynamically. We use radar to determine the size and position of the ROI for the HOG and propose a radar-camera sensor fusion algorithm. Experimental results are discussed.
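The core of the HOG descriptor is a histogram of gradient orientations, weighted by gradient magnitude, accumulated per cell inside the ROI. A bare-bones single-cell version is sketched below (real HOG adds block normalization and interpolated votes; this is background, not the paper's implementation):

```python
import math

def hog_cell_histogram(patch, bins=9):
    """Unsigned (0-180°) gradient-orientation histogram for one HOG cell."""
    h = [0.0] * bins
    rows, cols = len(patch), len(patch[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            h[int(ang / 180.0 * bins) % bins] += mag  # magnitude-weighted vote
    return h

# A patch containing only a vertical edge: all gradient energy lands at 0°
patch = [[0, 0, 10, 10]] * 4
hist = hog_cell_histogram(patch)
```

If the ROI is mis-sized or mis-positioned, background pixels enter the cells and dilute exactly these magnitude-weighted votes, which is why a radar-derived ROI helps the descriptor.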
Special Section on Analog Circuit Techniques and Related Topics
Frequencies around 300GHz offer an extremely broad atmospheric transmission window with relatively low losses of up to 10dB/km and can be regarded as the ultimate platform for ultrahigh-speed wireless communications with near-fiber-optic data rates. This paper reviews technical challenges and recent advances in integrated circuits targeted at communications using these and nearby “terahertz (THz)” frequencies. Possible new applications of THz wireless links that are hard to realize by other means are also discussed.
As the scaling of CMOS technology advances, transistor characteristics are evolving in favor of digital circuit design. This means conventional analog design techniques are becoming harder to apply in advanced technologies because of the low power-supply voltage, the narrow dynamic range of switching properties, and the low transconductance of transistors. Despite these circumstances, analog-to-digital converter (ADC) performance is still advancing thanks to innovative new architectures. This paper reviews recent trends in ADCs, exploring their performance as well as the use of time-interleaving schemes, non-static current amplifiers, and hybrid architectures.
A 12-bit 1.25MS/s cyclic analog-to-digital converter (ADC) is designed and fabricated in 90nm CMOS technology and occupies an active area of only 0.037mm2. The proposed ADC is composed of a non-binary A/D conversion stage and an on-chip non-binary-to-binary digital block that includes a built-in radix-value self-estimation scheme. Therefore, although a non-binary conversion architecture is adopted, the proposed ADC can be used in the same way as other stand-alone binary ADCs. The redundancy of the non-binary 1-bit/step architecture relaxes the accuracy requirements on the ADC's analog components. As a result, analog circuits such as the amplifier and comparator become simple to implement, and high-density metal-oxide-metal (MOM) capacitors can be used to achieve a small chip area. Furthermore, the novel radix-value self-estimation technique is realized with only simple logic circuits and no extra analog input, so the total active area of the ADC is dramatically reduced. The prototype ADC achieves a measured peak signal-to-noise-and-distortion ratio (SNDR) of 62.3dB while using an amplifier with a DC gain as low as 45dB and MOM capacitors without any careful layout techniques to improve capacitor matching. The proposed ADC dissipates 490µW in its analog circuits at a 1.4V power supply and 1.25Msps (20MHz clocking). The measured DNL is +0.94/-0.71LSB and the INL is +1.9/-1.2LSB with a 30kHz sinusoidal input.
Power line noise is one of the critical problems in bio-sensing. Various approaches utilizing both analog and digital techniques have been proposed; however, they require active circuits with a wide dynamic range. N-path notch filters, which can be implemented using passive components, are a promising solution to this problem. However, the notch depth of a conventional N-path notch filter is limited by the number of paths. A new N-path notch filter with an additional S/H circuit is proposed. Simulation results show that the proposed topology improves the notch depth by 43dB.
High Efficiency Video Coding (HEVC/H.265) achieves about 50% bit-rate reduction compared with the H.264/AVC standard at comparable quality, at the cost of high computational complexity. Merge mode is one of the most important new features introduced in HEVC's inter prediction. Merge mode and traditional inter mode consume about 90% of the total encoding time. To address this high complexity, this paper exploits the merge mode to accelerate inter prediction with four strategies. 1) A merge candidate decision based on the sum of absolute transformed differences (SATD) cost is proposed. 2) An early merge termination with more than 90% accuracy is presented. 3) Owing to the compensation effect of merge candidates, symmetric motion partition (SMP) mode is disabled for non-8×8 coding units (CUs). 4) A fast coding-unit filtering strategy is proposed to reduce the number of CUs that need to be fine-processed. Experimental results demonstrate that our fast strategies achieve 35.4%-58.7% time reduction with 0.68%-1.96% BD-rate increment in the RA case. Compared with similar works, the proposed strategies are not only among the best in average-case complexity reduction but also notably better in the worst cases.
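The SATD cost used in strategy 1) is conventionally computed by applying a Hadamard transform to the prediction residual and summing the absolute transform coefficients. A minimal 4×4 sketch (generic encoder background, not the paper's exact cost function):

```python
def hadamard4(block):
    """4x4 two-dimensional transform H·B·H^T with the unnormalized Hadamard matrix."""
    H = [[1,  1,  1,  1],
         [1,  1, -1, -1],
         [1, -1, -1,  1],
         [1, -1,  1, -1]]
    tmp = [[sum(H[i][k] * block[k][j] for k in range(4)) for j in range(4)]
           for i in range(4)]
    return [[sum(tmp[i][k] * H[j][k] for k in range(4)) for j in range(4)]
            for i in range(4)]

def satd4(orig, pred):
    """Sum of absolute transformed differences for a 4x4 block."""
    diff = [[o - p for o, p in zip(ro, rp)] for ro, rp in zip(orig, pred)]
    return sum(abs(c) for row in hadamard4(diff) for c in row)

orig = [[5, 5, 5, 5]] * 4
pred = [[4, 4, 4, 4]] * 4   # constant residual of 1 per pixel
cost = satd4(orig, pred)    # energy compacts into the DC coefficient
```

Because SATD approximates the coded cost of the residual more faithfully than plain SAD while remaining far cheaper than a full transform-and-quantize pass, it is a natural criterion for ranking merge candidates early.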
HTTP Adaptive Streaming (HAS) has become a popular solution for multimedia delivery. Because of throughput variations, video quality fluctuates during a streaming session, so a main challenge in HAS is how to evaluate the overall video quality of a session. In this paper, we explore the impacts of quality values and quality variations in HAS. We propose to use the histogram of segment quality values and the histogram of quality gradients in a session to model the overall video quality. Subjective test results show that the proposed model has very high prediction performance for different videos. In particular, the proposed model provides insights into the factors influencing overall quality, leading to suggestions for improving the quality of streaming video.
The Helmholtz-Kohlrausch (H-K) effect is a phenomenon in which the perceived brightness levels induced by two color stimuli differ even when the stimuli have the same luminance but different chroma in a particular hue. This phenomenon appears on display devices, and the wider the gamut of the device, the more the perceived brightness is affected by the H-K effect. Quantifying this effect can be expected to be useful for the development and evaluation of a wide range of display devices. However, quantifying the H-K effect would require considerable subjective evaluation experimentation, which would be a major burden. Therefore, the authors have derived perceived-brightness maps for natural images using an estimation equation for the H-K effect, without experimentation. Comparing and analyzing the calculated maps and ground-truth maps obtained through subjective evaluation experiments confirms strong overall correlation between them. However, a tendency of the calculated maps to estimate poorly in high-chroma regions strongly influenced by the H-K effect was also confirmed. In this study, we propose a method that improves the estimation accuracy of the H-K effect by correcting the calculated maps with a correction coefficient derived from this tendency, and we confirm the effectiveness of our method.
This paper proposes image super-resolution techniques with multi-channel convolutional neural networks. In the proposed method, output pixels are classified into K×K groups depending on their coordinates. Those groups are generated from separate channels of a convolutional neural network (CNN). Finally, they are synthesized into a K×K magnified image. This architecture can enlarge images directly without bicubic interpolation. Experimental results of 2×2, 3×3, and 4×4 magnifications have shown that the average PSNR for the proposed method is about 0.2dB higher than that for the conventional SRCNN.
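The grouping of output pixels by coordinate can be sketched as a sub-pixel reassembly step: each of the K×K channels predicts the output pixels whose coordinates fall in one congruence class modulo K, and the channels are interleaved into the magnified image (the same rearrangement used in sub-pixel convolution; dummy constant channel outputs are used here in place of CNN outputs):

```python
def assemble(groups, K):
    """Interleave K*K low-resolution channel outputs into one K-times magnified image.

    groups[dy][dx] holds the channel predicting output pixels whose
    coordinates are congruent to (dy, dx) modulo K.
    """
    h, w = len(groups[0][0]), len(groups[0][0][0])
    out = [[0] * (w * K) for _ in range(h * K)]
    for dy in range(K):
        for dx in range(K):
            for y in range(h):
                for x in range(w):
                    out[y * K + dy][x * K + dx] = groups[dy][dx][y][x]
    return out

K = 2
# four 2x2 channel outputs -> one 4x4 magnified image
groups = [[[[1, 1], [1, 1]], [[2, 2], [2, 2]]],
          [[[3, 3], [3, 3]], [[4, 4], [4, 4]]]]
img = assemble(groups, K)
```

This rearrangement is what lets the network enlarge images directly, with no bicubic pre-interpolation: the CNN operates at the low resolution and the upscaling happens only in this final interleaving.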
The vast majority of foreground detection methods require heavy hardware optimization to process standard-definition videos in real time. Indeed, these methods process the whole frame not only for detection but also for background modelling, which makes them so resource-intensive (in time, memory, etc.) that they cannot be applied to Ultra High Definition (UHD) videos. This paper presents a real-time background modelling method called Mixed Block Background Modelling (MBBM). It is a spatio-temporal approach that updates the background model by carefully selecting blocks in both a linear and a pseudo-random order and updating the corresponding parts of the model. The two block-selection orders ensure that every block is updated. For foreground detection, the method is combined with a detector designed for UHD videos, such as the Adaptive Block-Propagative Background Subtraction method. Experimental results show that the proposed MBBM can process 50min of 4K UHD video in less than 6 hours, while other methods are estimated to take from 8 days to more than 21 years. Compared with 10 state-of-the-art foreground detection methods, the proposed MBBM shows the best quality results, with an average global quality score of 0.597 (1 being the maximum) on a dataset of 4K UHDTV sequences containing various situations such as illumination variation. Finally, the per-pixel processing time of the MBBM is the lowest of all compared methods, with an average of 3.18×10^-8s.
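The mixed block-selection idea can be sketched by interleaving a linear scan with a full-period pseudo-random scan over the block indices, so both sub-orders are guaranteed to visit every block (a hypothetical generator choice; the MBBM paper's pseudo-random order may differ):

```python
def mixed_block_order(n_blocks, a=5, c=1):
    """Interleave a linear scan with a full-period LCG scan over block indices.

    The LCG parameters (a, c) satisfy the Hull-Dobell conditions modulo
    n_blocks = 16, so the pseudo-random sub-order also visits every block.
    """
    linear = list(range(n_blocks))
    pseudo, s = [], 0
    for _ in range(n_blocks):
        pseudo.append(s)
        s = (a * s + c) % n_blocks
    order = []
    for l, p in zip(linear, pseudo):   # alternate the two selection orders
        order.extend((l, p))
    return order

order = mixed_block_order(16)          # every block appears in both sub-orders
```

Updating only one or two small blocks per frame, in an order with this coverage guarantee, is what keeps the per-frame modelling cost low enough for UHD input while still refreshing the whole background model over time.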
In this paper, a hardware-efficient design methodology for a configurable-point multiple-stream pipeline FFT processor is presented. We first compared the memory and arithmetic components of different pipeline FFT architectures and concluded that the MDF architecture is more hardware-efficient than MDC for the overall processor. Then, to reduce the computational complexity, a binary-tree representation was adopted to analyze the decomposition algorithm; consequently, the number of coefficient multiplications is minimized over all decomposition possibilities. In addition, an efficient output-reorder circuit was designed for the multiple-stream architecture. A 128∼2048-point 4-stream FFT processor for the LTE system was designed in SMIC 55nm technology for evaluation. It occupies a 1.09mm2 core area and consumes 82.6mW at a 122.88MHz clock frequency.
Powerful jammers can disable consumer-grade global navigation satellite system (GNSS) receivers under normal operating conditions. Conventional time-domain anti-jamming techniques cannot effectively suppress wide-band interference such as chirp-like jammers. This paper proposes a novel anti-jamming architecture that combines wavelet packet signal analysis with adaptive filtering theory to mitigate chirp interference. Exploiting the excellent time-frequency resolution of wavelet technologies makes it possible to generate a reference chirp signal, which is essentially a “de-noised” jamming signal. The reference jamming signal is then fed into an adaptive predictor, which refines it and predicts a replica of the jammer from the received signal. The refined chirp signal is then subtracted from the received signal to achieve anti-jamming. Simulation results demonstrate the effectiveness of the proposed method in combating chirp interference in Galileo receivers: we achieved a jamming-to-signal power ratio (JSR) of 50dB with an acquisition probability exceeding 90%, which is superior to many time-domain anti-jamming techniques, such as conventional adaptive notch filters. The proposed method was also implemented in a software-defined GPS receiver for further validation.
Controlling synchrony as well as desynchrony in a network of neuronal oscillators has been one of the central issues in nonlinear science and engineering. It is well known that spike stimuli injected commonly into multiple neurons can synchronize them if the strength of the common spike stimuli is high enough. Our recent study showed that this common spike-induced synchrony can be suppressed by introducing heterogeneity into the inhibitory connections through which the common spikes are transmitted. The aim of the present study is to apply this methodology to electronic neurons as real physical hardware. Using an Axon-Hillock circuit that reproduces the basic properties of the leaky integrate-and-fire (LIF) neuron, our experiment demonstrated that the method is quite effective for desynchronizing the neuron circuits. The experimental results are also in good agreement with the linear response theory that describes the input-output relationship of LIF neurons. Our method of suppressing neuronal synchrony should be of practical use both for enhancing neural information processing and for improving pathological states of the brain.
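The baseline phenomenon, common spike-induced synchrony of LIF neurons, can be reproduced with a few lines of Euler integration. The sketch below is illustrative only: it models neither the Axon-Hillock circuit nor the heterogeneous inhibitory connections, and all parameters are arbitrary.

```python
def lif_spikes(v0, common_times, w, T=500.0, dt=0.1,
               tau=20.0, i_dc=1.5, v_th=1.0, v_reset=0.0):
    """Euler-integrated leaky integrate-and-fire neuron driven by a DC
    current plus strong common input spikes of weight w."""
    spike_steps = {int(round(s / dt)) for s in common_times}
    v, out = v0, []
    for k in range(int(T / dt)):
        v += (dt / tau) * (i_dc - v)    # leaky integration toward i_dc
        if k in spike_steps:
            v += w                      # common excitatory spike arrives
        if v >= v_th:
            out.append(k * dt)          # threshold crossing: emit spike
            v = v_reset
    return out

common = [100.0, 200.0, 300.0]          # shared spike train
a = lif_spikes(0.0, common, w=1.2)      # two neurons with different
b = lif_spikes(0.7, common, w=1.2)      # initial membrane potentials
```

Because the common spikes are strong enough to trigger both neurons regardless of their phase, the two spike trains become identical after the first common spike, even though the neurons start desynchronized.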
An adaptive time-step control method is proposed for the damped pseudo-transient analysis (DPTA) method. The new method is based on the idea of switched evolution/relaxation (SER), which automatically adapts the step size to different circuit states. By considering the number of iterations needed for the Newton-Raphson (NR) method to converge, together with the states in previous steps, the proposed method automatically optimizes the time-step size. Numerical examples show that the new method improves the robustness, simulation efficiency, and convergence of DPTA for solving nonlinear DC circuit equations.
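A schematic controller in the spirit of SER might look as follows; the thresholds and growth/shrink factors are illustrative assumptions, not the values used in the paper.

```python
def ser_step(h, iters, iters_prev, h_min=1e-9, h_max=1e3,
             grow=2.0, shrink=0.5, iter_target=5):
    """SER-style time-step update for pseudo-transient analysis:
    few NR iterations -> the circuit state is easy, enlarge the step;
    many iterations (or divergence) -> shrink it."""
    if iters < 0:                       # NR failed to converge
        h_new = h * shrink
    elif iters <= iter_target and iters_prev <= iter_target:
        h_new = h * grow                # two easy steps in a row
    elif iters > 2 * iter_target:
        h_new = h * shrink              # convergence is getting hard
    else:
        h_new = h                       # keep the current step
    return min(max(h_new, h_min), h_max)
```

The clamp to [h_min, h_max] keeps the pseudo-time step from collapsing to zero or running away, mirroring the robustness goal of the proposed controller.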
Maintenance of a shipboard system is limited while the ship is engaged in a voyage because maintenance resources are scarce at sea. When the system fails, it is either repaired instantly on board with probability p or, owing to the lack of maintenance resources, remains unrepaired during the voyage with probability 1-p; in the latter case, the system is repaired after the voyage. We propose two management policies for the overhaul interval of a system with an increasing failure rate (IFR): one manages the overhaul interval by the number of voyages, and the other by the total voyage time. Our goal is to determine the optimal policy that ensures the required availability of the system while minimizing the expected cost rate.
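To make the trade-off concrete, the toy model below searches for the cost-rate-minimizing overhaul interval under the by-number-of-voyages policy. The failure-probability law and all cost figures are hypothetical placeholders, not the paper's model.

```python
def cost_rate(N, p=0.7, T=30.0, c_overhaul=20.0,
              c_onboard=5.0, c_post=20.0):
    """Toy expected cost rate for the 'overhaul every N voyages' policy.
    Voyage i fails with increasing probability q_i = 0.05*i (a crude IFR
    stand-in, capped at 1). An on-board repair costs c_onboard with
    probability p; otherwise the repair is deferred to the end of the
    voyage and costs c_post. T is the (assumed constant) voyage length."""
    per_failure = p * c_onboard + (1 - p) * c_post
    expected_failures = sum(min(1.0, 0.05 * i) for i in range(1, N + 1))
    return (c_overhaul + expected_failures * per_failure) / (N * T)

# Search the interior optimum: overhauling too often wastes overhaul
# cost, overhauling too rarely accumulates wear-out failures.
best_N = min(range(1, 51), key=cost_rate)
```

The interior optimum reflects the structure of the real problem: the optimal interval balances the fixed overhaul cost against the growing expected failure cost of an aging (IFR) system.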
Anonymous password-based authentication protocols are designed to provide not only password-based authentication but also client anonymity. Qian et al. proposed a simple anonymous password-based authenticated key exchange protocol (SAPAKE). In this paper, we reconsider the SAPAKE protocol, first showing that a third-party active attacker can impersonate the server and compute a session key with probability 1. After giving a formal model that captures such attacks, we propose a simple and secure anonymous password-based authentication (for short, S2APA) protocol that is secure against modification attacks on protocol-specific values and is more efficient than YZWB09/10 and SAPAKE. We also prove that the S2APA protocol is AKE-secure against active attacks as well as modification attacks under the computational Diffie-Hellman assumption in the random oracle model, and that it provides unconditional client anonymity against a semi-honest server that honestly follows the protocol.
Rapid process scaling and the introduction of the multilevel cell (MLC) concept have lowered the cost of NAND Flash memories but also degraded their reliability. For this reason, the memories depend on strong error-correcting codes (ECCs), which has enabled them to be used in a wide range of storage applications, including solid-state drives (SSDs). However, excessive error-correcting capability requires excessive decoding complexity and check bits. In NAND Flash memories, cell errors to neighboring voltage levels are more probable than errors to distant levels. Several ECCs reflecting this characteristic have been proposed, including limited-magnitude ECCs, which correct only errors up to a certain magnitude, and low-density parity-check (LDPC) codes. However, since most of these ECCs need multiple bits in a cell for encoding, they cannot be used with multipage programming, the high-speed programming method currently employed in the memories. Moreover, binary ECCs with Gray codes are no longer optimal when multilevel voltage shifts (MVSs) occur. In this paper, an error correction method reflecting this error characteristic is presented. The method detects errors with a binary ECC in the conventional manner, but a nonbinary value, i.e., all the bits in a cell, is subjected to error correction, so that the cell is corrected to the most probable neighboring value. The bit error rate (BER) improvement depends on the probability of each error magnitude. In the case of 2 bits/cell, if only errors of magnitude 1 and 2 can occur and the latter accounts for 5% of cell errors, the acceptable BER is improved by 4%, which corresponds to extending endurance by 2.4%. The method needs about 15% longer average latency, 19% longer maximum latency, and 15% lower throughput. However, by using the conventional method until the memories reach their rated number of program/erase cycles and the proposed method thereafter, the BER improvement can be exploited to extend endurance without latency or throughput degradation before the switch.
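A minimal sketch of the cell-level correction step, assuming the common 2-bit Gray mapping of voltage levels (an assumption for illustration, not the paper's exact circuit): once the binary ECC flags and corrects a bit on one page, the whole cell is re-mapped to the nearest voltage level consistent with that corrected bit, i.e., the most probable neighboring value.

```python
GRAY = [0b00, 0b01, 0b11, 0b10]   # voltage level -> 2-bit Gray code

def correct_cell(read_level, bit_pos, corrected_bit):
    """After the binary ECC on one page corrects bit `bit_pos` of this
    cell, re-map the WHOLE cell to the candidate level that matches the
    corrected bit and lies nearest to the read voltage level."""
    candidates = [lvl for lvl in range(4)
                  if (GRAY[lvl] >> bit_pos) & 1 == corrected_bit]
    # smallest voltage shift = most probable error magnitude
    return min(candidates, key=lambda lvl: abs(lvl - read_level))

# Example: stored level 2 (Gray 11) is read as level 1 (Gray 01); the
# upper-page ECC corrects bit 1 back to 1, and among the levels whose
# Gray code has that bit (levels 2 and 3) the nearest to the read
# level is 2, recovering the stored value.
```

Preferring the nearest consistent level is exactly where the BER gain comes from: magnitude-1 shifts dominate the cell error statistics, so the nearest candidate is the most probable original value.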
It is well known that spatially coupled (SC) codes with erasure-BP decoding have powerful error-correcting capability over memoryless erasure channels. However, the decoding performance of SC codes significantly degrades when they are used over burst erasure channels. In this paper, we propose band splitting permutations (BSPs) suitable for (l, r, L) SC codes. A BSP splits the diagonal band in a base matrix into multiple bands in order to enhance the span of the stopping sets in the base matrix. As theoretical performance guarantees, lower and upper bounds on the maximal correctable burst length of the permuted (l, r, L) SC codes are presented. These bounds indicate that the maximal correctable burst ratio of the permuted SC codes is given by λmax ≃ 1/k, where k = r/l. This implies the asymptotic optimality of the permuted SC codes in terms of burst erasure correction.
In this paper, we propose two secure multiuser multiple-input multiple-output (MIMO) transmission approaches based on interference alignment (IA) in the presence of an eavesdropper. To deal with the information leakage to the eavesdropper as well as the interference from undesired transmitters (Txs) at the desired receivers (Rxs), our approaches design the transmit precoding and receive subspace matrices to minimize both the total inter-main-link interference and the wiretapped signals (WSs). The first proposed IA scheme focuses on aligning the WSs into proper subspaces, while the second imposes a new structure on the precoding matrices to force the WSs to zero. In each scheme, the precoding matrices and the receive subspaces at the legitimate users are alternately updated to minimize the cost function of a convex optimization problem at every iteration. We provide the feasibility conditions and convergence proofs for both IA approaches. Simulation results indicate that our two IA approaches outperform the conventional IA algorithm in terms of the average secrecy sum rate.
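For intuition, the sketch below runs the classical alternating leakage-minimization IA iteration, in which receive subspaces and precoders are updated in turn as smallest-eigenvalue subspaces of the interference covariance. It deliberately omits the eavesdropper and wiretapped-signal terms that distinguish the proposed schemes, and the 3-user, 2x2-antenna, single-stream setting is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, N, d = 3, 2, 2, 1            # users, Tx/Rx antennas, streams
H = {(k, j): (rng.standard_normal((N, M))
              + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
     for k in range(K) for j in range(K)}   # H[(k, j)]: Tx j -> Rx k

def min_eigvecs(Q):
    """Eigenvectors of the d smallest eigenvalues: the least-interference
    subspace of the Hermitian covariance Q."""
    _, vecs = np.linalg.eigh(Q)
    return vecs[:, :d]

def fwd_cov(k, V):
    """Interference covariance seen at Rx k from the undesired Txs."""
    return sum(H[(k, j)] @ V[j] @ V[j].conj().T @ H[(k, j)].conj().T
               for j in range(K) if j != k)

def leakage(U, V):
    """Total inter-main-link interference power in the Rx subspaces."""
    return sum(np.real(np.trace(U[k].conj().T @ fwd_cov(k, V) @ U[k]))
               for k in range(K))

# random unitary initialization of the precoders
V = [np.linalg.qr(rng.standard_normal((M, d))
                  + 1j * rng.standard_normal((M, d)))[0] for _ in range(K)]
hist = []
for _ in range(2000):
    U = [min_eigvecs(fwd_cov(k, V)) for k in range(K)]   # Rx subspaces
    hist.append(leakage(U, V))
    V = [min_eigvecs(sum(H[(j, k)].conj().T @ U[j] @ U[j].conj().T
                         @ H[(j, k)] for j in range(K) if j != k))
         for k in range(K)]          # precoders via the reciprocal network
```

Each half-step minimizes the same leakage objective, so the iteration is monotonically non-increasing, which is the mechanism behind the convergence behavior established for the alternating designs.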
We propose the node name routing (NNR) strategy for information-centric ad-hoc networks based on named-node networking (3N). This strategy is especially valuable in disaster areas: when the Internet is out of service during a disaster, it can be used to set up a self-organizing network via cell phones or other terminal devices with a sharing capability, without relying on a base station (BS) or similar infrastructure. Our strategy solves the multiple-name problem that arose in prior 3N proposals, as well as the dead-loop problems in both 3N and TCP/IP ad-hoc networks. To evaluate the NNR strategy, we compared it with the optimized link state routing (OLSR) protocol and the dynamic source routing (DSR) strategy. Comprehensive computer simulations showed that NNR performs better in this environment when all users move randomly, and that its advantage in terms of packet delivery, routing cost, and other metrics grows with the number of users.
Accurately labeling salient regions in video with cluttered backgrounds and complex motion remains challenging. Most existing video salient-region detection models mainly extract stimulus-driven saliency features to detect the salient region, so they are easily influenced by cluttered backgrounds and complex motion, which may lead to incomplete or wrong detections. In this paper, we propose a video salient-region detection framework that fuses stimulus-driven saliency features with a spatiotemporal consistency cue to improve detection under these complex conditions. On one hand, stimulus-driven spatial and temporal saliency features are extracted to derive initial spatial and temporal salient-region maps. On the other hand, to exploit the spatiotemporal consistency cue, an effective spatiotemporal consistency optimization model is presented, and we use this model to optimize the initial spatial and temporal salient-region maps. The superpixel-level spatiotemporal salient-region map is then derived by optimizing the initial spatiotemporal salient-region map, and the pixel-level map is finally obtained by solving a self-defined energy model. Experimental results on challenging video datasets demonstrate that the proposed framework outperforms state-of-the-art methods.
Advances in intelligent vehicle systems have led to modern automobiles being able to aid drivers with tasks such as lane following and automatic braking. Such automated driving tasks increasingly require reliable ego-localization. Although a large number of sensors can be employed for this purpose, the use of a single camera remains one of the most appealing options, but also one of the most challenging. GPS localization in urban environments may not be reliable enough for automated driving systems, and various combinations of range sensors and inertial navigation systems are often too complex and expensive for a consumer setup; accurate localization with a single camera is therefore a desirable goal. In this paper, we propose a method for vehicle localization using images captured from a single vehicle-mounted camera and a pre-constructed database. Image feature points are extracted, but the calculation of camera poses is not required; instead, we make use of the feature points' scale. For image feature-based localization methods, matching many features against candidate database images is time consuming, and database sizes can become large. We therefore propose a method that constructs a database from pre-matched features of known good scale stability. This limits the number of unused and incorrectly matched features and allows the database scales to be recorded into "feature scale tracklets". These tracklets are used for fast image-match voting based on scale comparison with the corresponding query image features, which reduces the number of image-to-image matching iterations while improving localization stability. We also present an analysis of the system performance using a dataset with high-accuracy ground truth, and demonstrate robust vehicle positioning even in challenging lane-change and real traffic situations.
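The scale-comparison voting idea can be sketched as follows. This is a heavy simplification: the tracklet data structure, the ratio tolerance, and the assumption that each query feature is already associated with a tracklet are all placeholders for illustration.

```python
from collections import Counter

def vote_by_scale(query_feats, tracklets, ratio_tol=0.25):
    """Each query feature (tracklet_id, scale) votes for the database
    image whose recorded scale in that tracklet is closest to the query
    scale; candidates outside the ratio tolerance are dropped, which is
    what prunes image-to-image matching iterations."""
    votes = Counter()
    for tid, q_scale in query_feats:
        best_img, best_err = None, ratio_tol
        for img_id, db_scale in tracklets[tid]:
            err = abs(q_scale / db_scale - 1.0)
            if err < best_err:
                best_img, best_err = img_id, err
        if best_img is not None:
            votes[best_img] += 1
    return votes.most_common(1)[0][0] if votes else None

# Hypothetical two-image database: each tracklet records the scale at
# which a stable feature was observed in images 'A' and 'B'.
tracklets = {0: [('A', 2.0), ('B', 4.0)],
             1: [('A', 1.0), ('B', 2.0)]}
query = [(0, 2.1), (1, 1.05)]   # query scales close to image A's
```

Because both query scales agree with the scales recorded for image 'A', the vote selects 'A' without any descriptor-level image-to-image matching.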
In this letter, we explore joint optimization of a perceptual gain function and deep neural networks (DNNs) for single-channel speech enhancement. A DNN architecture is proposed that incorporates the masking properties of the human auditory system to make the residual noise inaudible. This architecture directly trains a perceptual gain function, which is used to estimate the magnitude spectrum of clean speech from noisy speech features. Experimental results demonstrate that the proposed approach achieves significant improvements over the baselines when tested on TIMIT sentences corrupted by various types of noise, regardless of whether the noise conditions are included in the training set.
In this letter, we propose a novel speech separation method based on a perceptually weighted deep recurrent neural network (DRNN) that incorporates the masking properties of the human auditory system. In the supervised training stage, we first utilize the clean label speech of two different speakers to calculate two perceptual weighting matrices. These matrices are then used to adjust the mean squared error between the network outputs and the reference features of the two clean speech signals, so that the two speakers can mask each other. Experimental results on the TSP speech corpus demonstrate that the proposed speech separation approach achieves significant improvements over state-of-the-art methods under different mixing conditions.
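The perceptually weighted error used in such training stages can be sketched as a masking-weighted MSE. The weight values below are illustrative constants, not outputs of a real psychoacoustic model; in practice the weighting matrices would be derived from the clean label speech as described above.

```python
import numpy as np

def perceptual_weighted_mse(outputs, targets, weights):
    """Masking-weighted MSE over the separated sources: spectral bins
    that the auditory model marks as heavily masked get small weights,
    so residual interference there is penalized lightly; exposed bins
    are penalized in full."""
    return sum(np.mean(w * (y - s) ** 2)
               for y, s, w in zip(outputs, targets, weights))

# Toy check: the same spectral error costs less when it falls in a
# heavily masked region.
err = np.ones(4)
target = np.zeros(4)
w_masked = np.full(4, 0.1)      # bins masked by the other speaker
w_exposed = np.ones(4)          # clearly audible bins
loss_masked = perceptual_weighted_mse([err], [target], [w_masked])
loss_exposed = perceptual_weighted_mse([err], [target], [w_exposed])
```

Down-weighting masked bins steers the network's capacity toward errors a listener would actually hear, which is the point of the perceptual weighting.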
In this letter, we propose an algorithm for two-dimensional (2D) direction-of-arrival (DOA) estimation of noncircular coherently distributed (CD) sources using a centrosymmetric array. For a centrosymmetric array, we prove that the angular signal distributed weight (ASDW) vector of a CD source has a symmetric structure. To estimate the azimuth and elevation angles, we perform a 2D search based on the generalized ESPRIT algorithm. A significant advantage of the proposed algorithm is that the 2D central directions of CD sources can be estimated independently of the deterministic angular distribution function (DADF). Simulation results verify the efficacy of the proposed algorithm.