This review provides a summary of the basic theory of scale modeling and some example applications. The article is written to introduce the principles and the thinking process unique to scale modeling, not to provide a thorough literature review of relevant articles. Five examples of scale modeling applied to vibro-acoustic problems are given: architectural acoustics and the modeling of a concert hall, acoustic streaming jets, acoustic reciprocity, vibration casting, and the reduction of NOx by acoustic waves. The scaling laws for the first three examples, reported previously, are introduced in this article based on the law approach, while the scaling laws for the last two examples were developed in this article.
In this paper, we used blood oxygenation level-dependent (BOLD) signals and emotion ratings to investigate the relationship between the type and strength of emotion induced by musical stimuli. Our goal was to establish a quantitative emotional evaluation method that uses brain activity. In Experiment 1, 26 participants rated 60 pieces of music using a semantic differential scale, and 20 pieces were chosen on the basis of Russell's circumplex model. In Experiment 2, we investigated the relationships between the type and strength of emotion and brain activity by asking 20 participants to listen to the pieces of music in a magnetic resonance imaging (MRI) scanner. We identified brain regions for which the BOLD signal intensity was correlated with the ratings of emotions. As a result, the "Happy" rating was mainly correlated with activity in the superior temporal gyrus. The "Sad" rating was correlated with activity in the left thalamus. The "Fear" rating was mainly correlated with activity in the parahippocampal gyrus, insular cortex, and right amygdala. By focusing on activity in these brain regions, it may be possible to quantify the type and strength of emotions evoked by music.
Many acoustical simulation methods have been studied to investigate acoustical phenomena. Modeling the directivity pattern of a sound source is also important for obtaining realistic simulation results; however, there has been little research on this topic. Although there has been research on sound source identification, the results might not be in a form suitable for numerical simulation. In this paper, a method for modeling a sound source from measured data is proposed. It utilizes a sum of monopoles as the physical model, and the modeling is achieved by estimating the model parameters. The estimation method is formulated as a convex optimization problem by assuming the smoothness of a solution and the sparseness of parameters. Moreover, an algorithm based on the alternating direction method of multipliers (ADMM) for solving the problem is derived. The validity of the method is evaluated using simulated data, and the modeling result for an actual loudspeaker is shown.
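As an illustration of the kind of estimation this abstract describes, the following sketch fits a sparse set of monopole amplitudes to simulated microphone pressures by solving a complex-valued LASSO with ADMM. This is not the authors' implementation: the geometry, frequency, regularization weight, penalty parameter, and iteration count are all assumed for the example, and the paper's smoothness prior is omitted so only the sparseness term appears.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2 * np.pi * 4000 / 343.0                   # wavenumber at 4 kHz (assumed frequency)

# hypothetical geometry: 12 candidate monopole positions on a line at y = 0,
# 24 microphones on a parallel line at y = 1 m
src_x = np.linspace(-0.55, 0.55, 12)
mic_x = np.linspace(-1.0, 1.0, 24)

# dictionary of free-field monopole transfer functions G = exp(-jkr) / (4*pi*r)
r = np.sqrt((mic_x[:, None] - src_x[None, :])**2 + 1.0)
A = np.exp(-1j * k * r) / (4 * np.pi * r)

# simulated measurement: two active monopoles (candidates 2 and 9) plus weak noise
x_true = np.zeros(12, dtype=complex)
x_true[2], x_true[9] = 1.0, 0.7
b = A @ x_true + 1e-4 * (rng.standard_normal(24) + 1j * rng.standard_normal(24))

# ADMM for the complex LASSO:  min_x  0.5*||A x - b||^2 + lam*||x||_1
lam, rho = 5e-3, 0.05
P = np.linalg.inv(A.conj().T @ A + rho * np.eye(12))
Atb = A.conj().T @ b
x = z = u = np.zeros(12, dtype=complex)
for _ in range(500):
    x = P @ (Atb + rho * (z - u))              # ridge-regularized least-squares step
    v = x + u
    shrink = np.maximum(1 - (lam / rho) / np.maximum(np.abs(v), 1e-12), 0)
    z = shrink * v                             # complex soft-thresholding (sparsity prior)
    u = u + x - z                              # dual update

top = sorted(np.argsort(np.abs(z))[-2:])
print("recovered monopole indices:", top)
```

The soft-thresholding update on `z` is where the sparseness assumption enters: candidate monopoles whose amplitudes stay below the threshold are driven exactly to zero, leaving only the few that explain the measured pressures.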
In order to explore the articulatory nature of contrastive emphasis, this study compares contrastively emphasized and non-emphasized syllables in terms of mandible position and F0 peaks. The stimuli were English monosyllabic words with /ai/, spoken in short utterances as part of read dialogues. Articulatory and acoustic data obtained by the University of Wisconsin x-ray microbeam facilities from six American English speakers were analyzed. The results show that for emphasized syllables, the jaw is lower and generally more front, and F0 is higher, compared to non-emphasized syllables. In addition to corroborating previous observations about larger jaw opening and higher F0 for emphasized syllables, our new finding is protrusion of the jaw in emphasized syllables. A possible hypothesis that we entertain in this paper is that fronting of the jaw may allow large jaw opening with a high F0 target. We offer a tentative, yet concrete, hypothesis about the biomechanical interaction between F0 control and jaw opening mediated by anatomical connections between the jaw and the larynx.
To realize physically accurate sound field reproduction, the boundary surface of a sound field to be reproduced should be spatially discretized with intervals smaller than a half wavelength. Otherwise, spatial aliasing will occur in the reproduced field, which leads to low physical reproducibility. Therefore, accurate sound field reproduction covering the full audible range up to approximately 20 kHz requires an impractically large number of sampling points, namely, microphones and loudspeakers. However, it may be possible to reduce the number of sampling points if the degradation in physical performance due to spatial aliasing does not degrade the spatial perception of the reproduced sound field. To achieve such perceptual optimization of a sound field reproduction system, it must first be clarified how spatial aliasing degrades the physical and perceptual reproducibility of such a system. Therefore, as a first step toward investigating the physical reproducibility of sound field reproduction with spatial aliasing, we numerically simulate the reproduced sound field and the binaural signals that a listener inside the reproduced field would receive. The numerical results for the reproduced sound field with spatial aliasing showed that sampling intervals larger than a half wavelength yield unnecessary wave fronts that reach a listener 1 ms after the main wave fronts. Furthermore, the results of the numerical simulation of a binaural signal, employing a boundary element simulation of a human head, suggested that interaural time differences and level differences are approximately reproduced when the upper-bound frequency of physically accurate reproduction is greater than 4 and 8 kHz, respectively.
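To make the half-wavelength requirement concrete, the following back-of-the-envelope sketch computes the maximum sampling interval and a rough count of boundary sampling points for several upper-bound frequencies. The spherical boundary and its 0.5 m radius are assumed purely for illustration, and the point count is a simple area-over-spacing-squared estimate, not the paper's configuration.

```python
import math

C = 343.0  # speed of sound in air [m/s]

def max_spacing(f_hz):
    """Largest sampling interval avoiding spatial aliasing up to f_hz: half a wavelength."""
    return C / (2.0 * f_hz)

def n_points_on_sphere(f_hz, radius=0.5):
    """Rough number of sampling points on a spherical boundary of the given radius
    (illustrative estimate: surface area divided by the square of the max spacing)."""
    d = max_spacing(f_hz)
    return math.ceil(4.0 * math.pi * radius**2 / d**2)

for f in (4_000, 8_000, 20_000):
    print(f"{f/1000:5.1f} kHz: spacing <= {max_spacing(f)*1000:.1f} mm, "
          f"~{n_points_on_sphere(f):,} points")
```

Because the point count grows with the square of the upper-bound frequency, covering the full audible range up to 20 kHz needs roughly 25 times as many sampling points as stopping at 4 kHz, which is what motivates relaxing the physical criterion to a perceptual one.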
When we listen to sounds radiated from a single sound source, the sound image is generally localized at a certain spatial position. In addition, similar to a visual image, the sound image has a certain size and shape. Previous works reported that the center frequency of broadband noise affects the width of the sound image, and that the bandwidth of broadband noise also affects the size of the sound image. Although these works suggest that the center frequency and the frequency bandwidth each individually affect the size of the sound image, an experiment employing simultaneous control of both parameters is required to explore how the spectral characteristics of the source signal affect the size of the sound image. Furthermore, very little is known about the perception of the shape of the sound image, which would be essential for a comprehensive understanding of spatial auditory perception. Therefore, in this work, a sketch-drawing experiment was conducted to capture the size and shape of the sound image for a single sound source in an anechoic environment. The experimental results reveal that both a lower center frequency and a broader bandwidth of the broadband noise lead to a larger sound image. Moreover, the results show inter- and intra-individual variations in the shape of the sound image, including circular, elliptical, and rectangular-like shapes.