Proceedings of the Japan Academy, Series B
Online ISSN : 1349-2896
Print ISSN : 0386-2208
ISSN-L : 0386-2208
Original Articles
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic
Jun'ichi YOKOYAMA

2014 Volume 90 Issue 10 Pages 422-432

Abstract

After reviewing the standard hypothesis test and the matched filter technique used to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, in which the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student’s t-distribution, which has heavier tails than the Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter works well in the highly non-Gaussian case.

1. Introduction

The large-scale cryogenic gravitational wave telescope (LCGT), now known as KAGRA, began construction in 2010 deep underground at the Kamioka mine. Its tunnel excavation, which started in early 2012,1) was recently completed,2) and the installation of a laser interferometer3) has begun, aiming at the first direct detection of gravitational waves (GWs) in competition or cooperation with the advanced LIGO4) and advanced Virgo5) detectors, one century after Einstein proposed general relativity and predicted gravitational waves propagating at the speed of light.

Among the four known elementary interactions in nature, gravity is by far the weakest force. Its high penetrating power would convey information on the deep interiors of celestial bodies and on the very early Universe. On the other hand, the same property makes the signal very difficult to catch, which is why no one has yet succeeded in the direct detection of gravitational waves. The problem is to extract a tiny signal out of much larger detector noise, and this is attempted by using a filter matched to the expected signals. If the detector noise is Gaussian distributed, we have a fairly straightforward technique, as reviewed in the next section. However, the actual noises are known to be highly non-Gaussian. We must therefore invent appropriate methods to deal with this non-Gaussianity so as not to miss the signal.

In this article, we first present a mini review of signal detection under Gaussian noise and then introduce two methods toward the detection of gravitational waves under non-Gaussian noises, in preparation for the forthcoming KAGRA. Several papers have been written to deal with non-Gaussian noises for GW observations.6)–10) However, they are mostly based on specific non-Gaussian distributions such as double Gaussian, exponential,6) χ2, and Student’s t-distributions.8) Here we try to be as model independent as possible, since we do not know the actual noise distribution a priori.

The rest of the paper is organized as follows. After a short review of signal detection under Gaussian noise in §2, in §3 we incorporate a small deviation from the Gaussian distribution in terms of the Edgeworth expansion and calculate the likelihood ratio with it. In §4 the case of strongly non-Gaussian noise is handled by a new method called Gaussian mapping, and the likelihood ratio test is formulated for an arbitrary non-Gaussian marginal distribution. In §5 we apply the results of §§3 and 4 to Student’s t-distribution, a symmetric distribution with heavier tails than the corresponding Gaussian distribution. Finally, §6 is devoted to the conclusion.

2. Signal detection

Here we first present a minimal review of signal detection and describe the optimal statistic and filter under Gaussian noise.*

*    Standard references for this section include Refs. 11–14.

2.1. Hypothesis testing.

We first consider a hypothesis test using a time sequence of a detector output x(t) and a GW signal (if any) h(t). There are two distinct cases, with and without a nonvanishing signal h(t). The former is expressed as H1, or simply by 1, when we measure the sum x(t) = n(t) + h(t) as the output, and the latter by H0, or 0, when we measure only noise, x(t) = n(t).

There are two kinds of errors associated with hypothesis testing. One is the false alarm (FA), claiming detection without an actual signal, whose probability is given by the conditional probability PFA = P(1|0). The other is the false dismissal (FD), missing detection even if a signal is present, given by the probability PFD = P(0|1). We wish to maximize the detection probability under a fixed FA probability, or significance level.

An extremely useful theorem (although called a lemma) is the Neyman-Pearson lemma.15) Using the likelihood ratio Λ(x),   

\begin{equation} \Lambda(x) \equiv \frac{P(x|1)}{P(x|0)}, \end{equation} [1]
we define a test such that   
\begin{align*} &\text{reject null hypothesis $H_{0}$ if}\quad \Lambda(x)>k\\ &\text{adopt null hypothesis $H_{0}$ if}\quad \Lambda(x)\leq k \end{align*}
where k satisfies   
\begin{equation*} P(\Lambda(x) > k|0) = \alpha. \end{equation*}
Then the lemma states that this is the most powerful test at the significance level α. In other words, this test gives the largest detection rate under a fixed false alarm rate α. If there is a free fitting parameter, one must first fix it by maximizing Λ(x) before applying the test.

This lemma shows the importance of the likelihood ratio or the noise distribution function.
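As an illustration, the following minimal sketch calibrates the Neyman-Pearson threshold k by Monte Carlo for a known signal in white Gaussian noise and then estimates the resulting detection probability. The waveform, noise level, and significance level are all assumptions made for this example, not quantities taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, alpha = 256, 1.0, 0.01
t = np.arange(N, dtype=float)
# assumed waveform: a Gaussian-windowed sinusoid with arbitrary normalization
h = 0.3 * np.sin(2 * np.pi * t / 32) * np.exp(-((t - N / 2) / 40.0) ** 2)
q = h / sigma**2                  # q_i = (K^{-1})_{ij} h_j for white noise K = sigma^2 I

def log_lambda(x):
    """ln Lambda(x) = q.x - h.q/2, i.e. eq. [2] specialized to white noise."""
    return q @ x - 0.5 * h @ q

# calibrate the threshold k on the null hypothesis: P(ln Lambda > k | 0) = alpha
null = np.array([log_lambda(rng.normal(0.0, sigma, N)) for _ in range(20000)])
k = np.quantile(null, 1.0 - alpha)

# detection probability at that threshold when the signal is present
alt = np.array([log_lambda(rng.normal(0.0, sigma, N) + h) for _ in range(20000)])
print(f"k = {k:.3f}, P_FA = {(null > k).mean():.3f}, P_detect = {(alt > k).mean():.3f}")
```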

2.2. Signal detection under Gaussian noise.

Here we consider the case of random Gaussian noise. Discretizing the time sequence as x(ti) ≡ xi, the likelihood ratio reads   

\begin{align} \Lambda(x) &= \exp\biggl[-\frac{1}{2}(x_{i} - h_{i})(K^{-1})_{ij}(x_{j} - h_{j}) \\ &\quad + \frac{1}{2}x_{i}(K^{-1})_{ij}x_{j}\biggr] \\ &= \exp\left(q_{i}x_{i} - \frac{1}{2}h_{i}q_{i}\right) \end{align} [2]
where Kij = ⟨ninj⟩ is the noise covariance matrix or the two-time correlation function, and qi ≡ (K−1)ijhj or hi = Kijqj. Here and throughout, summation over repeated indices in the same term is assumed. In the continuum limit, we find   
\begin{equation} \langle n(t)n(t')\rangle\equiv K(t,t')\ \text{and}\ h(t) = \int K(t,t')q(t')dt'. \end{equation} [3]
Thus the log-likelihood ratio   
\begin{equation} \ln\Lambda(x) = \int q(t)x(t)dt - \frac{1}{2}\int q(t)h(t)dt \end{equation} [4]
can be maximized if a linear correlator   
\begin{equation} G\equiv\int q(t)x(t)dt \end{equation} [5]
is maximized. This is the linear matched filter for a known wave form h(t).

Let us summarize some properties of this matched filter. The expectation value without any signal is simply equal to zero, E0{G} = 0, whereas that with signal h(t) reads   

\begin{equation} E_{1}\{G\} = \int q(t)h(t)dt. \end{equation} [6]
The variance is given by   
\begin{equation} \text{Var}\{G\} = \int q(t)h(t)dt \end{equation} [7]
irrespective of whether signal exists or not. We thus find the signal-to-noise (SN) ratio   
\begin{equation} \frac{S}{N} = \frac{E_{1}\{G\}}{\sqrt{\text{Var}\{G\}}} = \sqrt{\int q(t)h(t)dt}. \end{equation} [8]
From this result we deduce that when the linear matched filter is maximized, the S/N ratio is also maximized for Gaussian noises.
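These properties are easy to verify numerically. The following sketch does so for colored Gaussian noise; the exponential covariance, the Gaussian-envelope waveform, and all parameter values are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
t = np.arange(N, dtype=float)
K = np.exp(-np.abs(t[:, None] - t[None, :]) / 5.0)   # assumed covariance K_ij
h = 0.5 * np.exp(-((t - 100.0) / 15.0) ** 2)         # assumed signal h_i
q = np.linalg.solve(K, h)                            # q_i = (K^{-1})_{ij} h_j

L = np.linalg.cholesky(K)
noise = rng.standard_normal((10000, N)) @ L.T        # rows n with <n n^T> = K

G0 = noise @ q                                       # filter output under H0
G1 = (noise + h) @ q                                 # filter output under H1
qh = q @ h
print(f"E1[G]  = {G1.mean():8.3f}   theory {qh:8.3f}")                      # eq. [6]
print(f"Var[G] = {G0.var():8.3f}   theory {qh:8.3f}")                       # eq. [7]
print(f"S/N    = {G1.mean() / G0.std():8.3f}   theory {np.sqrt(qh):8.3f}")  # eq. [8]
```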

2.3. Frequency domain.

It is often useful to work in Fourier space with frequency f rather than dealing with the time sequence directly. Assuming stationary noise, its correlation function is invariant under time translation, and we find   

\begin{align} \langle n(t)n(t')\rangle &= K(t - t') \\ &= \frac{1}{2}\int_{-\infty}^{\infty}S_{n}(|f|)e^{-2\pi if(t - t')}df, \end{align} [9]
where Sn(|f|) = 2Sn(f) is the one-sided power spectrum of noise. From   
\begin{align} h(t) &= \int\tilde{h}(f)e^{-2\pi ift}df \\ &= \int K(t - t')q(t')dt' \\ &= \frac{1}{2}\int S_{n}(|f|)\tilde{q}(f)e^{-2\pi ift}df \end{align} [10]
we find   
\begin{equation} \tilde{q}(f) = \frac{2\tilde{h}(f)}{S_{n}(|f|)}. \end{equation} [11]
We therefore obtain   
\begin{equation} G = \int q(t)x(t)dt = \int\frac{2\tilde{h}(f)\tilde{x}^{*}(f)}{S_{n}(|f|)}df, \end{equation} [12]
and the log-likelihood ratio   
\begin{align} \ln\Lambda(x) &= \int\frac{2\tilde{h}(f)\tilde{x}^{*}(f)}{S_{n}(|f|)}df \\ &\quad- \frac{1}{2}\int\frac{2\tilde{h}(f)\tilde{h}^{*}(f)}{S_{n}(|f|)}df, \end{align} [13]
for Gaussian noise.
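The following sketch implements the frequency-domain statistic [12] for white noise, approximating the continuous Fourier transform by Δ times the FFT; the injected waveform and sampling parameters are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, sigma = 4096, 1.0 / 1024, 1.0        # assumed sampling: 4 s at 1024 Hz
Tobs = N * dt
t = np.arange(N) * dt
# assumed signal: a 100 Hz sinusoid under a Gaussian envelope
h = 0.5 * np.sin(2 * np.pi * 100.0 * t) * np.exp(-((t - 2.0) / 0.3) ** 2)
x = rng.normal(0.0, sigma, N) + h

Sn = np.full(N // 2 + 1, 2.0 * sigma**2 * dt)   # one-sided PSD of white noise

hf = dt * np.fft.rfft(h)     # dt * FFT approximates the continuous transform
xf = dt * np.fft.rfft(x)
df = 1.0 / Tobs

# eq. [12]: G = int 2 h x* / S_n df over all f  =  4 Re sum_{f>0} h x* / S_n df
G = 4.0 * np.real(np.sum(hf * np.conj(xf) / Sn)) * df
snr = np.sqrt(4.0 * np.real(np.sum(np.abs(hf) ** 2 / Sn)) * df)  # sqrt of eq. [8]
print(f"G = {G:.2f}, expected S/N = {snr:.2f}")
```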

2.4. Optimal filter.

One can ask which filter F(t) is optimal, i.e., which maximizes the S/N ratio. Defining   

\begin{equation*} G_{F} \equiv \int F(t)x(t)dt = \int\tilde{F}(f)\tilde{x}^{*}(f)df, \end{equation*}
we find that the expectation value in the presence of a signal and the variance of GF are respectively given by   
\begin{align} &E_{1}\{G_{F}\} = \int\tilde{F}(f)\tilde{h}^{*}(f)df, \\ &\text{Var}\{G_{F}\} = \frac{1}{2}\int S_{n}(|f|)\tilde{F}(f)\tilde{F}^{*}(f)df. \end{align} [14]
If we define an inner product by   
\begin{equation*} [\tilde{a}(f),\tilde{b}(f)]\equiv \frac{1}{2}\int S_{n}(|f|)\tilde{a}(f)\tilde{b}^{*}(f)df, \end{equation*}
[14] is expressed as   
\begin{align*} &E_{1}\{G_{F}\} = \left[\tilde{F}(f),\frac{2\tilde{h}(f)}{S_{n}(|f|)}\right], \\ &\text{Var}\{G_{F}\} = [\tilde{F}(f),\tilde{F}(f)], \end{align*}
so that the S/N ratio is expressed as   
\begin{equation} \left(\frac{S}{N}\right)^{2} {}={} \cfrac{\biggl[\tilde{F}(f),\cfrac{2\tilde{h}(f)}{S_{n}(|f|)}\biggr]^{2}}{[\tilde{F}(f),\tilde{F}(f)]}. \end{equation} [15]
This shows that the S/N ratio is maximized when $\tilde{F}(f) = 2\tilde{h}(f)/S_{n}(|f|)$. That is, the linear matched filter [12] used in the Gaussian likelihood function [13] maximizes the S/N ratio.

Note, however, that the above property holds if and only if the noise distribution is Gaussian. For generic non-Gaussian distributions, there is no reason that the linear matched filter [12] should be optimal, and one should in general consider non-linear filters. In the next two sections we instead consider the likelihood ratio test under non-Gaussian noises directly.

2.5. Locally optimal statistic.

Before proceeding, however, we define a locally optimal statistic.16) In the actual detection of GWs, the wave form has a number of undetermined parameters. As mentioned above, we maximize the likelihood function over these parameters. Among these undetermined parameters is the overall amplitude of the GW, ε, which depends on the distance to the source. Let us consider the case where it is the only remaining parameter, writing the N-point discrete data sequence as $x_{i} = n_{i} + \epsilon \hat{h}_{i}$ with $\sum_{i = 1}^{N}| \hat{h}_{i}|^{2} = 1$.

Then the likelihood ratio depends on ε as Λ(x;ε) = P(x|ε)/P(x|0). If the amplitude were large enough, we could detect GWs without any sophisticated statistical treatment, so what matters for a likelihood test is the case where ε is very small. We may therefore expand the likelihood ratio with respect to ε as   

\begin{equation*} \Lambda(x;\epsilon) = 1 + \epsilon\Lambda_{1}(x) + \frac{\epsilon^{2}}{2}\Lambda_{2}(x) + \ldots \end{equation*}
The locally optimal statistic is defined by the first-order coefficient   
\begin{align} \Lambda_{1}(x) &= \frac{1}{P(x|0)}\frac{d}{d\epsilon}P(x|\epsilon)\bigg|_{\epsilon=0} \\ &= \frac{d}{d\epsilon}\ln\Lambda(x;\epsilon)\bigg|_{\epsilon=0}, \end{align} [16]
and it controls the entire likelihood ratio when the GW amplitude is small.
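As a simple numerical check of [16], for white Gaussian noise the locally optimal statistic reduces to the linear matched filter of §2.2; the sketch below, with an assumed random template, compares a finite-difference ε-derivative of the log-likelihood with that analytic result.

```python
import numpy as np

rng = np.random.default_rng(3)
N, sigma = 64, 1.0
hhat = rng.standard_normal(N)
hhat /= np.linalg.norm(hhat)         # normalized template, sum |hhat_i|^2 = 1
x = rng.normal(0.0, sigma, N)        # one realization of the data

def log_like(x, eps):
    """ln P(x|eps) up to an eps-independent constant, white Gaussian noise."""
    r = x - eps * hhat
    return -0.5 * np.sum(r**2) / sigma**2

eps = 1e-6
lam1_fd = (log_like(x, eps) - log_like(x, -eps)) / (2 * eps)  # eq. [16], finite difference
lam1_an = (hhat @ x) / sigma**2                               # analytic matched filter
print(lam1_fd, lam1_an)                                       # the two should agree
```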

3. Signal detection under weakly non-Gaussian noise distribution

Since Gaussian noise is fully characterized by the covariance matrix, or the two-body correlation function, any nonvanishing higher-order cumulants, or reduced correlation functions, are signatures of non-Gaussianity of the probability distribution function (PDF), P(x). We incorporate the effects of these nonvanishing higher-order cumulants into the PDF and the likelihood function to find the locally optimal statistic. The Edgeworth expansion17) provides such a framework to incorporate them around the normal distribution, $\varphi (y) \equiv e^{ - y^{2}/2}/\sqrt{2\pi } $, in connection with the central limit theorem. Here and hereafter, the variable y (as well as yi or yp introduced later) denotes a normalized quantity with vanishing mean and unit variance, such as $y \equiv x/\sigma_{x}$ with $\sigma _{x} \equiv \sqrt{\langle x^{2}\rangle } $.

First we expand P(y) in terms of φ(y) and its derivatives as   

\begin{align} P(y) &= \sum_{r=0}^{\infty}\frac{c_{r}}{r!}\varphi^{(r)}(y) \\ &= \sum_{r=0}^{\infty}\frac{c_{r}}{r!}(-)^{r}H_{r}(y)\varphi(y) \end{align} [17]
where the second equality follows from the definition of the Hermite polynomials. From their orthogonality, the coefficients are given by   
\begin{equation} c_{r} = (-)^{r}\int_{-\infty}^{\infty}H_{r}(y)P(y)dy. \end{equation} [18]
Using   
\begin{align} &H_{0}(y) = 1,\quad H_{1}(y) = y,\quad H_{2}(y) = y^{2} - 1,\quad \\ &H_{3}(y) = y^{3} - 3y,\quad H_{4}(y) = y^{4} - 6y^{2} + 3,\ldots \end{align} [19]
we find   
\begin{align} &c_{0} = 1,\quad c_{1} = 0,\quad c_{2} = 0,\quad \\ &c_{3} = -\langle y^{3}\rangle,\quad c_{4} = \langle y^{4}\rangle - 3,\ldots. \end{align} [20]

Now let us consider a random variable ξ which is the sum of $\tilde{N}$ statistically independent variables with the same mean ⟨ξj⟩ = m1 and variance $\langle (\xi _{j} - m_{1})^{2}\rangle = \sigma _{1}^{2}\ (j = 1,2, \ldots ,\tilde{N})$:   

\begin{equation*} \xi = \xi_{1} + \xi_{2} + \ldots + \xi_{\tilde{N}}. \end{equation*}
The central limit theorem asserts that the PDF, P(y), of a variable y = (ξ − m)/σ, where m = ⟨ξ⟩ and $\sigma ^{2} = \langle (\xi - m)^{2}\rangle = \tilde{N}\sigma _{1}^{2}$ are the mean and the variance of ξ, approaches the normal distribution as $\tilde{N}$ increases.

We can relate the characteristic function Ψ(z) of the PDF P(y) to that of the one-component PDF P1(yi), with yi ≡ (ξi − m1)/σ1, as   

\begin{align} \Psi(z)&\equiv\int_{-\infty}^{\infty}e^{izy}P(y)dy = \langle e^{iz\frac{\xi-m}{\sigma}}\rangle \\ &= \left\langle\exp\left[iz\sum_{j=1}^{\tilde{N}}\frac{\xi_{j} - m_{1}}{\sqrt{\tilde{N}}\sigma_{1}}\right]\right\rangle \\ &= \left[\Psi_{1}\left(\frac{z}{\sqrt{\tilde{N}}}\right)\right]^{\tilde{N}}, \end{align} [21]
where Ψ1(z) is the characteristic function of P1(yi). Now we define the following expansion   
\begin{equation} e^{\frac{z^{2}}{2}}\Psi(z) = \int_{-\infty}^{\infty}e^{\frac{z^{2}}{2} + izy}P(y)dy\equiv\sum_{m=0}^{\infty}\frac{\tilde{c}_{m}}{m!}(-iz)^{m} \end{equation} [22]
and compare with [17]. Then using the identity,   
\begin{equation} \int e^{izy}\varphi^{(m)}(y)dy = (-iz)^{m}e^{-\frac{z^{2}}{2}}, \end{equation} [23]
as well as the following relation between the characteristic function and the generating function, Φ1, of the cumulants λm of y1,   
\begin{equation} \Psi_{1}(z) = e^{\Phi_{1}(z)},\quad\Phi_{1}(z) = \sum_{m = 0}^{\infty}\frac{\lambda_{m}}{m!}(iz)^{m}, \end{equation} [24]
we find that $\tilde{c}_{m} = c_{m}$ and   
\begin{align} e^{\frac{z^{2}}{2}}\Psi(z) &= \sum_{m=0}^{\infty}\frac{c_{m}}{m!}(-iz)^{m} \\ &= \exp\left[\tilde{N}\sum_{j=3}^{\infty}\frac{\lambda_{j}}{j!}\left(\frac{iz}{\sqrt{{\tilde{N}}}}\right)^{j}\right] \\ &= \sum_{\ell = 0}^{\infty}\frac{\tilde{N}^{\ell}}{\ell!}\left[\sum_{j=3}^{\infty}\frac{\lambda_{j}}{j!}\left(\frac{iz}{\sqrt{\tilde{N}}}\right)^{j}\right]^{\ell}. \end{align} [25]
The j-th coefficient can then be expressed in terms of the j-th and lower-order normalized cumulants and inverse powers of $\tilde{N}$; for example,   
\begin{align} &c_{3} = -\frac{\lambda_{3}}{\sqrt{\tilde{N}}},\quad c_{4} = \frac{\lambda_{4}}{\tilde{N}},\quad \\ &c_{5} = -\frac{\lambda_{5}}{\tilde{N}^{3/2}},\quad c_{6} = \frac{\lambda_{6}}{\tilde{N}^{2}} + 10\frac{\lambda_{3}^{2}}{\tilde{N}},\ldots \end{align} [26]

The Edgeworth expansion is obtained by rearranging these coefficients in powers of $\tilde{N}^{ - 1/2}$ as   

\begin{align} P(y) &= \varphi(y) - \frac{\gamma_{3}}{3!}\varphi^{(3)}(y) + \frac{\gamma_{4}}{4!}\varphi^{(4)}(y) \\ &\quad+ \frac{10\gamma_{3}^{2}}{6!}\varphi^{(6)}(y) + \ldots, \end{align} [27]
or equivalently,   
\begin{align} P(y) &= \biggl[1 + \frac{\gamma_{3}}{3!}H_{3}(y) + \frac{\gamma_{4}}{4!}H_{4}(y) \\ &\quad+ \frac{10\gamma_{3}^{2}}{6!}H_{6}(y) + \ldots\biggr]\varphi(y), \end{align} [28]
where $\gamma _{3} \equiv \lambda _{3}/\sqrt{{\tilde{N}}} $ and $\gamma _{4} \equiv \lambda _{4}/\tilde{N}$.

This is how we can incorporate higher-order cumulants around the otherwise Gaussian distribution. We note that although the derivation here is based on the approach to the Gaussian distribution through the central limit theorem, the same type of Edgeworth expansion also appears in the context of the nonlinear evolution of density fluctuations starting from random Gaussian linear fluctuations.18) In our problem we of course put y = x/σ, where x is the detector output, but depending on the nature of the underlying distribution, the parameter quantifying the deviation from Gaussianity may differ from $\tilde{N}$.
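For concreteness, the truncated Edgeworth PDF [28] can be evaluated as in the following sketch, which uses SciPy’s probabilists’ Hermite polynomials; the values of γ3 and γ4 there are assumed purely for illustration.

```python
import numpy as np
from scipy.special import eval_hermitenorm
from scipy.stats import norm

def edgeworth_pdf(y, gamma3, gamma4):
    """Truncated Edgeworth series [28] around the normal distribution."""
    correction = (1.0
                  + gamma3 / 6.0 * eval_hermitenorm(3, y)               # H_3 term
                  + gamma4 / 24.0 * eval_hermitenorm(4, y)              # H_4 term
                  + 10.0 * gamma3**2 / 720.0 * eval_hermitenorm(6, y))  # H_6 term
    return correction * norm.pdf(y)

# assumed illustrative values of the normalized skewness and excess kurtosis
y = np.linspace(-4.0, 4.0, 9)
print(edgeworth_pdf(y, gamma3=0.4, gamma4=0.2))
```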

Extension to the multivariate case around a Gaussian PDF of the N discretized time sequence of noises ni is possible, starting with the original noise PDF in the Gaussian limit,   

\begin{align} &P[\{n_{i}\}]d^{N}n \\ &\quad= \frac{1}{\sqrt{(2\pi)^{N}\|K\|}}\exp\left[-\frac{1}{2}\sum_{j,\ell = 1}^{N}n_{j}(K^{-1})_{j\ell}n_{\ell}\right]d^{N}n,\quad \\ &\qquad\|K\|\equiv\det K \end{align} [29]
which can be diagonalized by an orthogonal (real unitary) matrix U as $n_{j}=U_{j\ell}\psi_{\ell} $:   
\begin{align} n_{j}(K^{-1})_{j\ell}n_{\ell} &= \psi_{j}(U^{\dagger}K^{-1}U)_{j\ell}\psi_{\ell} \\ &= \sum_{p=1}^{N}\Lambda_{p}\psi_{p}^{2}\equiv\sum_{p=1}^{N}y_{p}^{2},\quad \\ &\quad\Lambda_{p} = \frac{1}{\sigma_{p}^{2}},\quad y_{p} = \frac{\psi_{p}}{\sigma_{p}}, \end{align} [30]
where Λp (p = 1, …, N) are the eigenvalues of the matrix K−1. For each yp we can apply the Edgeworth expansion independently, yielding   
\begin{align} P_{EW}[\{y_{i}\}] &= \prod_{p = 1}^{N}\Biggl[1 + \frac{\gamma_{3}^{(p)}}{3!}H_{3}(y_{p}) + \frac{\gamma_{4}^{(p)}}{4!}H_{4}(y_{p}) \\ &\quad+ \frac{10\gamma_{3}^{(p)2}}{6!}H_{6}(y_{p}) + \ldots\Biggr]\varphi(y_{p}) \end{align} [31]
The log-likelihood ratio then reads   
\begin{align} \ln\Lambda_{EW} &= (\text{Gaussian part}) + \sum_{p=1}^{N}\ln\Biggl[1 + \frac{\gamma_{3}^{(p)}}{3!}H_{3}(\hat{y}_{p}) \\ &\quad+ \frac{\gamma_{4}^{(p)}}{4!}H_{4}(\hat{y}_{p}) + \frac{10\gamma_{3}^{(p)2}}{6!}H_{6}(\hat{y}_{p}) + \ldots\Biggr]\\ &\quad-\sum_{p=1}^{N}\ln\Biggl[1 + \frac{\gamma_{3}^{(p)}}{3!}H_{3}(y_{p}) + \frac{\gamma_{4}^{(p)}}{4!}H_{4}(y_{p}) \\ &\quad+ \frac{10\gamma_{3}^{(p)2}}{6!}H_{6}(y_{p}) + \ldots\Biggr] \end{align} [32]
where   
\begin{equation*} y_{p} = \frac{\psi_{p}}{\sigma_{p}} = \frac{1}{\sigma_{p}}U_{pj}^{\dagger}n_{j},\quad\hat{y}_{p} = \frac{1}{\sigma_{p}}U_{pj}^{\dagger}(n_{j} - \epsilon\hat{h}_{j}) \end{equation*}
  
from which the exact locally optimal statistic reads
\begin{align} &\frac{d\ln\Lambda_{EW}}{d\epsilon}\bigg|_{\epsilon=0} \\ &\quad= (\text{Gaussian part}) \\ &\qquad- \sum_{p=1}^{N}\cfrac{\cfrac{\gamma_{3}^{(p)}}{3!}H'_{3}(y_{p}) + \cfrac{\gamma_{4}^{(p)}}{4!}H'_{4}(y_{p}) + \ldots}{1 + \cfrac{\gamma_{3}^{(p)}}{3!}H_{3}(y_{p}) + \cfrac{\gamma_{4}^{(p)}}{4!}H_{4}(y_{p}) + \ldots}U_{pj}^{\dagger}\frac{\hat{h}_{j}}{\sigma_{p}} \end{align} [33]
If we expand the logarithms on the right-hand side of [32], which is valid if the deviation from Gaussianity is small, we find   
\begin{align*} {\ln\Lambda_{EW}}&{{}\simeq(\text{Gaussian part}) + \sum_{p=1}^{N}\frac{\gamma_{3}^{(p)}}{3!}[H_{3}(\hat{y}_{p}) - H_{3}(y_{p})]} \\ &{\quad+ \sum_{p=1}^{N}\frac{\gamma_{4}^{(p)}}{4!}[H_{4}(\hat{y}_{p}) - H_{4}(y_{p})] + \ldots} \end{align*}
From this expression, the locally optimal statistic reads   
\begin{align} &\frac{d\ln\Lambda_{EW}}{d\epsilon}\bigg|_{\epsilon=0}\\ &\quad\simeq(\text{Gaussian part}) \\ &\qquad- \sum_{p=1}^{N}\left[\frac{\gamma_{3}^{(p)}}{2}(y_{p}^{2}-1) + \frac{\gamma_{4}^{(p)}}{6}(y_{p}^{3} - 3y_{p})\right]U_{pj}^{\dagger}\frac{\hat{h}_{j}}{\sigma_{p}} + \ldots \end{align} [34]

For stationary noise, we can obtain a more explicit expression by virtue of the discrete Fourier transform. Suppose that the data are sampled with time interval Δ from t = 0 to $T_{\text{obs}} \equiv N\Delta$, namely, at tj = jΔ (j = 1, 2, …, N). Then the discrete Fourier transform of n(tj) is given by   

\begin{equation} \tilde{n}(f_{m}) = \Delta\sum_{j=1}^{N}n(t_{j})e^{2\pi i\frac{jm}{N}} \end{equation} [35]
at $f_{m} \equiv \frac{m}{N\Delta }$ with $m = - \frac{N}{2}$, $ - \frac{N}{2} + 1, \ldots ,0,1, \ldots ,\frac{N}{2}$, assuming N is an even number. Its inverse reads   
\begin{equation} n(t_{j}) = \frac{1}{N\Delta}\sum_{m=1}^{N}\tilde{n}(f_{m})e^{-2\pi i\frac{jm}{N}}. \end{equation} [36]

In the stationary Gaussian PDF   

\begin{align} &P[\{n_{i}\}]d^{N}n \\ &\quad= \frac{1}{\sqrt{(2\pi)^{N}\|K\|}}\\ &\qquad\times\exp\left[-\frac{1}{2}\sum_{j,\ell=1}^{N}n(t_{j})K^{-1}(t_{j} - t_{\ell})n(t_{\ell})\Delta^{2}\right]d^{N}n, \end{align} [37]
we can expand the inverse covariance function as   
\begin{equation} K^{-1}(t_{j} - t_{\ell}) = \frac{1}{N\Delta}\sum_{p=1}^{N}\widetilde{K^{-1}}(f_{p})e^{-2\pi i(j-\ell)\frac{p}{N}} \end{equation} [38]
so that we find   
\begin{align} &{\sum_{j,\ell=1}^{N}n(t_{j})K^{-1}(t_{j} - t_{\ell})n(t_{\ell})\Delta^{2}} \\ &{\quad= \frac{1}{N^{3}\Delta^{3}}\sum_{m,p,q=1}^{N}\tilde{n}(f_{m})\widetilde{K^{-1}}(f_{p})\tilde{n}(f_{q})\Delta^{2}N\delta_{m+p,N}N\delta_{q-p,0}}\\ &{\quad=\frac{1}{N\Delta}\sum_{p=1}^{N}\tilde{n}(f_{N-p})\widetilde{K^{-1}}(f_{p})\tilde{n}(f_{p})} \\ &{\quad= \frac{1}{N\Delta}\sum_{p=1}^{N}\widetilde{K^{-1}}(f_{p})|\tilde{n}(f_{p})|^{2}} \end{align} [39]
where we have used an identity $\tilde{n}(f_{p}) = \tilde{n}^{*}(f_{N - p})$.

Using the discretized version of eq. [9], we find   

\begin{align} &\widetilde{K^{-1}}(f_{p}) = \frac{2}{S_{n}(f_{p})},\quad \\ \text{and}\quad &\langle|\tilde{n}(f_{p})|^{2}\rangle = \frac{1}{2}S_{n}(f_{p})T_{obs} \equiv 2\tilde{\sigma}_{p}^{2}. \end{align} [40]
It is simpler to consider the two real-valued quantities $\tilde{n}_{R}(f_{p}) \equiv \text{Re}\,\tilde{n}(f_{p})$ and $\tilde{n}_{I}(f_{p}) \equiv \text{Im}\,\tilde{n}(f_{p})$ than to deal with the complex variable $\tilde{n}(f_{p})$ itself. Under the distribution [37], these variables satisfy $\langle \tilde{n}_{R}^{2}(f_{p})\rangle = \langle \tilde{n}_{I}^{2}(f_{p})\rangle = S_{n}(f_{p})T_{obs}/4$ and $\langle \tilde{n}_{R}(f_{p})\tilde{n}_{I}(f_{p})\rangle = 0$, and they have no correlation with modes of different frequencies.

Hence we can identify   

\begin{equation} y_{p1}\equiv\frac{\tilde{n}_{R}(f_{p})}{\tilde{\sigma}_{p}}\quad\text{and}\quad y_{p2}\equiv\frac{\tilde{n}_{I}(f_{p})}{\tilde{\sigma}_{p}} \end{equation} [41]
in [30]. Then the locally optimal statistic corresponding to [34] reads   
\begin{align} &\frac{d\ln\Lambda_{EW}}{d\epsilon}\bigg|_{\epsilon=0}\\ &\quad\simeq(\text{Gaussian part}) - \sum_{i=1}^{2}\sum_{p_{i}=1}^{N}\Biggl[\frac{\gamma_{3}^{(p_{i})}}{2}(y_{p_{i}}^{2} - 1) \\ &\qquad+ \frac{\gamma_{4}^{(p_{i})}}{6}(y_{p_{i}}^{3} - 3y_{p_{i}}) + \ldots\Biggr]\frac{\widetilde{\hat{h}}_{i}(f_{p_{i}})}{\tilde{\sigma}_{p_{i}}}, \end{align} [42]
with $\widetilde{{\hat{h}}}_{1}(f_{p_{i}}) \equiv \text{Re}\,\widetilde{{\hat{h}}}(f_{p_{i}})$ and $\widetilde{{\hat{h}}}_{2}(f_{p_{i}}) \equiv \text{Im}\,\widetilde{{\hat{h}}}(f_{p_{i}})$ where $\widetilde{{\hat{h}}}(f_{p_{i}})$ is the Fourier transform of $\hat{h}(t_{j})$.
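The discrete-Fourier conventions used above, [35] and [40], together with the normalization entering [41], can be checked numerically for white noise, for which S_n is constant; the sampling parameters in the following sketch are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
N, dt, sigma = 1024, 1.0 / 256, 1.0      # assumed sampling parameters
Tobs = N * dt
Sn = 2.0 * sigma**2 * dt                 # one-sided PSD of white noise

# n~(f_m) = dt * FFT[n], as in [35], averaged over many realizations
nf = dt * np.fft.fft(rng.normal(0.0, sigma, (4000, N)), axis=1)

print(np.mean(np.abs(nf[:, 1:]) ** 2), 0.5 * Sn * Tobs)   # eq. [40]
# <n_R^2> = <n_I^2> = S_n T_obs / 4 (up to the real-valued Nyquist bin)
print(np.mean(nf.real[:, 1:] ** 2), 0.25 * Sn * Tobs)
```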

4. Gaussian mapping

The Edgeworth expansion discussed in the previous section works if and only if the deviation from Gaussianity is small. To deal with more realistic cases with larger deviations, we introduce the following new method, which we call Gaussian mapping. In this approach the observed one-point PDF, or marginal distribution, can be an arbitrary non-Gaussian distribution, and the two-body correlation function can also be fully taken into account. Since it is a formidable task to determine higher-order correlation functions of noise observationally, this method makes use of, and reproduces, as much observational information as possible.

We start with a multivariate random Gaussian   

\begin{align} &P_{\phi}[\{\phi_{i}\}]d^{N}\phi \\ &\quad= \frac{1}{\sqrt{(2\pi)^{N}\|\zeta\|}}\exp\left[-\frac{1}{2}\phi_{j}(\zeta^{-1})_{j\ell}\phi_{\ell}\right]d^{N}\phi \end{align} [43]
with   
\begin{equation*} \zeta_{j\ell} = \langle\phi_{j}\phi_{\ell}\rangle,\quad\zeta_{jj} = 1,\quad|\zeta_{j\ell}|\leq 1, \end{equation*}
where $\|\zeta\| $ denotes the determinant of the covariance matrix ζjℓ. Suppose that the noise n(ti) is a function of ϕi, n(ti) = Q[ϕi], at each time, and that Q has an inverse function Q−1, so that   
\begin{align} &\phi_{i} = Q^{-1}[n(t_{i})]\equiv g[n(t_{i})] = g(n_{i}),\quad \\ &d\phi_{i} = g'(n_{i})dn_{i}. \end{align} [44]
Then the multivariate PDF for ni reads   
\begin{align} &{P[\{n_{i}\}]d^{N}n} \\ &\quad = {\frac{1}{\sqrt{(2\pi)^{N}\|\zeta\|}}\exp\left[-\frac{1}{2}g(n_{j})(\zeta^{-1})_{j\ell}g(n_{\ell})\right]\prod_{p=1}^{N}g'(n_{p})dn_{p}} \end{align} [45]
The one-point PDF reads   
\begin{align} P(n_{j})dn_{j} &= P_{\phi}[\phi_{j} = Q^{-1}(n_{j})]g'(n_{j})dn_{j} \\ &= \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}g^{2}(n_{j})}g'(n_{j})dn_{j}. \end{align} [46]
This is to be determined from observations of the “training” samples. The cumulative PDF   
\begin{equation*} \hat{P}(n)\equiv\int_{0}^{n}P(n')dn' \end{equation*}
is given by   
\begin{align} &\hat{P}(n) = \frac{1}{\sqrt{2\pi}}\int_{0}^{g(n)}e^{-\frac{1}{2}y^{2}}dy = \frac{1}{\sqrt{\pi}}\mathrm{Erf}\left(\frac{g(n)}{\sqrt{2}}\right),\quad \\ &\mathrm{Erf} x\equiv\int_{0}^{x}e^{-t^{2}}dt. \end{align} [47]
We therefore find   
\begin{align} g(n) &= \sqrt{2}\mathrm{Erf}^{-1}(\sqrt{\pi}\hat{P}(n)) \\ &= \sqrt{2}\mathrm{Erf}^{-1}\left(\sqrt{\pi}\int_{0}^{n}P(n')dn'\right). \end{align} [48]
For an arbitrary well-behaved one-point PDF, P(n), we can thus find the function g(n).
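With the convention $\mathrm{Erf}\,x \equiv \int_{0}^{x}e^{-t^{2}}dt$ used above and a zero-median noise, [48] is equivalent to the normal-score transform g(n) = Φ−1(F(n)), with F the ordinary cumulative distribution. A sketch estimating g from “training” samples follows; the Student-t training noise there is only an assumed stand-in for real detector data.

```python
import numpy as np
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(4)
# assumed stand-in for detector noise: Student-t with 6 degrees of freedom
train = student_t.rvs(df=6, size=100000, random_state=rng)

# empirical cumulative distribution F on a grid, clipped so ppf stays finite
grid = np.linspace(-8.0, 8.0, 401)
F = np.searchsorted(np.sort(train), grid) / train.size
F = np.clip(F, 1e-6, 1.0 - 1e-6)
g = norm.ppf(F)          # g(n) = sqrt(2) Erf^{-1}(sqrt(pi) Phat(n)), eq. [48]

# phi = g(n) should be nearly standard normal for fresh noise samples
fresh = student_t.rvs(df=6, size=100000, random_state=rng)
phi = np.interp(fresh, grid, g)
print(f"mean = {phi.mean():+.3f}, variance = {phi.var():.3f}")   # ~0 and ~1
```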

Next we incorporate the two-point correlation function, which is important when the noise has a relatively long correlation time and cannot be “whitened” in a pre-processing stage of the data analysis pipeline. It is given by   

\begin{align} &\langle n(t_{m})n(t_{n})\rangle \\ &\quad= \langle Q[\phi_{m}]Q[\phi_{n}]\rangle_{\phi - \text{Gaussian}} \\ &\quad= \int d^{N}\phi\frac{Q(\phi_{m})Q(\phi_{n})}{\sqrt{(2\pi)^{N}\|\zeta\|}}\exp\left[-\frac{1}{2}\phi_{j}(\zeta^{-1})_{j\ell}\phi_{\ell}\right]. \end{align} [49]
Using the Fourier transform   
\begin{equation*} Q(\phi_{j}) = \int dk_{j}e^{-2\pi ik_{j}\phi_{j}}\tilde{Q}(k_{j}), \end{equation*}
we find   
\begin{align} &{\langle n(t_{m})n(t_{n})\rangle} \\ &{\quad= \int dk_{m}dk_{n}\frac{\tilde{Q}(k_{m})\tilde{Q}^{*}(k_{n})}{\sqrt{(2\pi)^{N}\|\zeta\|}}}\\ &{\qquad\times \exp\biggl[- 2\pi i(k_{m}\phi_{m} - k_{n}\phi_{n}) - \frac{1}{2}\phi_{j}(\zeta^{-1})_{j\ell}\phi_{\ell}\biggr]d\phi_{m}d\phi_{n}}\\ &{\quad=\int dk_{m}dk_{n}\tilde{Q}(k_{m})\tilde{Q}^{*}(k_{n})}\\ &{\qquad\times\exp[-2\pi^{2}(k_{m}^{2} - 2k_{m}k_{n}\zeta_{mn} + k_{n}^{2})]} \end{align} [50]
  
\begin{align} &{\quad=\int\frac{1}{2}\tilde{Q}\left(\frac{u + v}{2}\right)\tilde{Q}\left(\frac{u - v}{2}\right)}\\ &{\qquad\times\exp[-\pi^{2}(1-\zeta_{mn})u^{2} - \pi^{2}(1 + \zeta_{mn})v^{2}]dudv} \end{align} [51]

If n is purely Gaussian, we find n = Q(ϕ) = σϕ, with σ being the dispersion of the noise. The Fourier transform of Q(ϕ) is then given by   

\begin{equation} \tilde{Q}(k) = \frac{\sigma}{2\pi i}\delta'(k), \end{equation} [52]
which reproduces ⟨n(tm)n(tn)⟩ = σ2ζmn from [50] as it should be.

If, on the other hand, $\tilde{Q}(k)$ is a smooth function around k = 0, we can estimate [51] by a saddle-point approximation, yielding   

\begin{align} \langle n(t_{m})n(t_{n})\rangle&\equiv K(t_{m},t_{n})\\ &\simeq \frac{|\tilde{Q}(0)|^{2}}{2\pi}\frac{1}{\sqrt{1 - \zeta_{mn}^{2}}}, \end{align} [53]
So if we obtain K(tm, tn) = Kmn observationally, we can find ζmn as   
\begin{equation} \zeta_{mn}\simeq\left[1 - \left(\frac{|\tilde{Q}(0)|^{2}}{2\pi K_{mn}}\right)^{2}\right]^{1/2}. \end{equation} [54]
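The correspondence between ζmn and Kmn can be tabulated numerically; rather than the Fourier form [51], the sketch below evaluates the expectation [49] directly by Gauss-Hermite quadrature and inverts it with a root finder, for an assumed cubic map Q.

```python
import numpy as np
from scipy.optimize import brentq

xg, wg = np.polynomial.hermite.hermgauss(60)   # Gauss-Hermite nodes and weights

def Q(phi):
    return phi + 0.1 * phi**3                  # assumed nonlinear map n = Q[phi]

def K_of_zeta(zeta):
    """<Q(phi_m) Q(phi_n)> over a bivariate unit normal with correlation zeta."""
    u = np.sqrt(2.0) * xg[:, None]             # phi_m
    v = np.sqrt(2.0) * xg[None, :]
    phi_n = zeta * u + np.sqrt(1.0 - zeta**2) * v
    return np.sum(wg[:, None] * wg[None, :] * Q(u) * Q(phi_n)) / np.pi

def zeta_of_K(K):
    return brentq(lambda z: K_of_zeta(z) - K, -0.999, 0.999)

K_measured = 0.5 * K_of_zeta(0.9)              # pretend this K_mn was observed
print(f"recovered zeta_mn = {zeta_of_K(K_measured):.4f}")
```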

In actual applications, we should carry out such a numerical evaluation of [51] (or of [49] directly, as in the sketch above) to relate ⟨n(tm)n(tn)⟩ to ζmn. Note that if the noise is stationary, ζmn is also stationary. Once these correspondences are established, one can calculate the log-likelihood ratio as   

\begin{align} \ln\Lambda_{GM} &= \sum_{j,\ell=1}^{N}\biggl[-\frac{1}{2}g(x_{j} - \epsilon\hat{h}_{j})(\zeta^{-1})_{j\ell}g(x_{\ell} - \epsilon\hat{h}_{\ell}) \\ &\quad+ \frac{1}{2}g(x_{j})(\zeta^{-1})_{j\ell}g(x_{\ell})\biggr] + \sum_{j=1}^{N}\ln\frac{g'(x_{j} - \epsilon\hat{h}_{j})}{g'(x_{j})}, \end{align} [55]
and the locally optimal statistic reads   
\begin{align} \frac{d\ln\Lambda_{GM}}{d\epsilon}\bigg|_{\epsilon=0} &= \sum_{j,\ell=1}^{N}g'(x_{j})(\zeta^{-1})_{j\ell}g(x_{\ell})\hat{h}_{j} \\ &\quad- \sum_{j=1}^{N}\frac{g''(x_{j})}{g'(x_{j})}\hat{h}_{j}. \end{align} [56]
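A minimal sketch of the statistic [56] for whitened data, i.e., taking (ζ−1)jℓ = δjℓ; here g is the analytic map for Student-t noise rather than an empirically estimated one, and its derivatives are taken by finite differences, purely to keep the example short.

```python
import numpy as np
from scipy.stats import norm, t as student_t

m = 6
def g(n):
    """Exact map for t-distributed noise: g(n) = Phi^{-1}(F_t(n))."""
    return norm.ppf(student_t.cdf(n, df=m))

def dg(n, h=1e-5):                     # g'(n) by central differences
    return (g(n + h) - g(n - h)) / (2 * h)

def d2g(n, h=1e-4):                    # g''(n) by central differences
    return (g(n + h) - 2 * g(n) + g(n - h)) / h**2

rng = np.random.default_rng(5)
x = student_t.rvs(df=m, size=512, random_state=rng)   # assumed data segment
hhat = rng.standard_normal(512)
hhat /= np.linalg.norm(hhat)                          # assumed template

# eq. [56] with (zeta^{-1})_{jl} = delta_{jl} (whitened noise assumed)
stat = np.sum(dg(x) * g(x) * hhat) - np.sum(d2g(x) / dg(x) * hhat)
print(f"locally optimal statistic = {stat:.3f}")
```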

So far we have developed the new formalism primarily for real-time analysis dealing directly with the time sequence of data. This method, however, may also be applied to analysis in the frequency domain if we assume that the one-point PDF of each normalized Fourier mode, $\tilde{n}(f)$/$\sqrt{\langle |\tilde{n}(f)|{}^{2}\rangle} $, obeys the same distribution function, in analogy with [44]. In the next section we employ a specific non-Gaussian model to test the performance of the Edgeworth expansion and the Gaussian mapping based on the above treatment.

5. Application to Student’s t-distribution

Here we compare the results of the previous two sections using a specific non-Gaussian model, namely Student’s t-distribution. This distribution is an even function with longer and heavier tails than the Gaussian distribution with the same variance. Let us work in Fourier space and consider a single frequency mode, $\tilde{x}(f)$, assuming that there is no correlation with other frequencies. Let us further assume that the noise has random phase, so that $\text{Re}\,\tilde{n}(f) \equiv \tilde{n}_{R}$ and $\text{Im}\,\tilde{n}(f) \equiv \tilde{n}_{I}$ have no bilinear correlation, $\langle \tilde{n}_{R}(f)\tilde{n}_{I}(f)\rangle = 0$. These two variables then obey the bivariate t-distribution   

\begin{align} &P_{2}(\tilde{n}_{R},\tilde{n}_{I})d\tilde{n}_{R}d\tilde{n}_{I} \\ &\quad= \frac{1}{2\pi}\frac{m}{(m - 2)\tilde{\sigma}^{2}}\left[1 + \frac{\tilde{n}_{R}^{2} + \tilde{n}_{I}^{2}}{(m - 2)\tilde{\sigma}^{2}}\right]^{-\frac{m+2}{2}}d\tilde{n}_{R}d\tilde{n}_{I}, \end{align} [57]
where $\tilde{\sigma }^{2} \equiv \langle \tilde{n}_{R}^{2}\rangle = \langle \tilde{n}_{I}^{2}\rangle = \langle |\tilde{n}(f)|^{2}\rangle /2$ and m, which we assume to be larger than 4, is a parameter called the number of degrees of freedom. This distribution approaches the Gaussian as m → ∞.

Hereafter we consider the normalized variables $y_{1} \equiv \tilde{n}_{R}/\tilde{\sigma }$ and $y_{2} \equiv \tilde{n}_{I}/\tilde{\sigma }$, with the PDF   

\begin{align} &P_{2t}(y_{1},y_{2})dy_{1}dy_{2} \\ &\quad= \frac{m}{2\pi(m - 2)}\left(1 + \frac{y_{1}^{2} + y_{2}^{2}}{m - 2}\right)^{-\frac{m+2}{2}}dy_{1}dy_{2}, \end{align} [58]

The locally optimal statistic calculated from this PDF is given by   

\begin{align} &\frac{d\ln\Lambda_{2t}(y_{1},y_{2})}{d\epsilon}\bigg|_{\epsilon=0} \\ &\quad= \frac{m + 2}{m - 2 + y_{1}^{2} + y_{2}^{2}}\left(y_{1}\frac{\widetilde{\hat{h}}_{1}}{\tilde{\sigma}} + y_{2}\frac{\widetilde{\hat{h}}_{2}}{\tilde{\sigma}}\right), \end{align} [59]
again with $\widetilde{{\hat{h}}}_{1} \equiv \text{Re}\,\widetilde{{\hat{h}}}(f)$ and $\widetilde{{\hat{h}}}_{2} \equiv \text{Im}\,\widetilde{{\hat{h}}}(f)$.

The Edgeworth expansion of [58] can be carried out straightforwardly thanks to the random-phase property, which yields ⟨y1y2⟩ = 0. The fact that all odd-order cumulants vanish makes the expression even simpler. For the t-distribution, the inverse of the degree of freedom, 1/m, acts as the expansion parameter quantifying the approach to the Gaussian distribution in the large-m limit. Up to $\mathcal{O}(m^{ - 3/2})$ we find   

\begin{align} &P_{EW}(y_{1},y_{2}) \\ &\quad= \left[1 + \frac{\kappa_{4}}{4!}H_{4}(y_{1}) + \ldots\right] \\ &\qquad\times\left[1 + \frac{\kappa_{4}}{4!}H_{4}(y_{2}) + \ldots\right]\varphi(y_{1})\varphi(y_{2})\\ &\quad\cong\left[1 + \frac{\kappa_{4}}{4!}(y_{1}^{4} - 6y_{1}^{2} + y_{2}^{4} - 6y_{2}^{2} + 6)\right]\varphi(y_{1})\varphi(y_{2}). \end{align} [60]
Here κ4 is the fourth cumulant of y1 and y2 calculated by their marginal distribution   
\begin{align} P_{1t}(y) &= \int P_{2t}(y,y_{2})dy_{2} \\ &= \frac{1}{\sqrt{(m - 2)\pi}}\cfrac{\Gamma\biggl(\cfrac{m + 1}{2}\biggr)}{\Gamma\biggl(\cfrac{m}{2}\biggr)}\left(1 + \frac{y^{2}}{m - 2}\right)^{-\frac{m + 1}{2}}, \end{align} [61]
that is, $\kappa _{4} = \langle y_{1}^{4}\rangle - 3 = \frac{6}{m - 4}$.

As a result the locally optimal statistic corresponding to [33] is given by   

\begin{align} &\frac{d\ln\Lambda_{EW}}{d\epsilon}\bigg|_{\epsilon=0} \\ &\quad= \left[y_{1} - \frac{4y_{1}^{3} - 12y_{1}}{4(m - 4) + y_{1}^{4} - 6y_{1}^{2} + 3}\right]\frac{\widetilde{\hat{h}}_{1}}{\tilde{\sigma}} \\ &\qquad+ \left[y_{2} - \frac{4y_{2}^{3} - 12y_{2}}{4(m - 4) + y_{2}^{4} - 6y_{2}^{2} + 3}\right]\frac{\widetilde{\hat{h}}_{2}}{\tilde{\sigma}} \end{align} [62]
  
\begin{equation} \quad\simeq\left(1 + \frac{3 - y_{1}^{2}}{m}\right)y_{1}\frac{\widetilde{\hat{h}}_{1}}{\tilde{\sigma}} + \left(1 + \frac{3 - y_{2}^{2}}{m}\right)y_{2}\frac{\widetilde{\hat{h}}_{2}}{\tilde{\sigma}}, \end{equation} [63]
the last expression being valid for $m \gg \max (4,y_{1}^{4}/4,y_{2}^{4}/4)$.

Now let us turn to the analysis using the Gaussian mapping method. In the two-variable system at hand, thanks again to the property ⟨y1y2⟩ = 0, this method is equivalent to treating y1 and y2 as fully independent variables with no mutual correlation. Hence in this method the PDF is given by a product of the marginal distributions [61],   

\begin{equation} P_{GM}(y_{1},y_{2}) = P_{1t}(y_{1})P_{1t}(y_{2}). \end{equation} [64]
One could in principle calculate the function g(n) introduced in the previous section and use the formula [56], but in practice the same result is obtained more directly from [64] in this case, yielding   
\begin{align} &\frac{d\ln\Lambda_{GM}}{d\epsilon}\bigg|_{\epsilon = 0} \\ &\quad= \frac{(m + 1)y_{1}}{m - 2 + y_{1}^{2}}\frac{\widetilde{\hat{h}}_{1}}{\tilde{\sigma}} + \frac{(m + 1)y_{2}}{m - 2 + y_{2}^{2}}\frac{\widetilde{\hat{h}}_{2}}{\tilde{\sigma}} \end{align} [65]
  
\begin{equation} \quad\simeq\left(1 + \frac{3 - y_{1}^{2}}{m}\right)y_{1}\frac{\widetilde{\hat{h}}_{1}}{\tilde{\sigma}} + \left(1 + \frac{3 - y_{2}^{2}}{m}\right)y_{2}\frac{\widetilde{\hat{h}}_{2}}{\tilde{\sigma}}, \end{equation} [66]
the latter being valid for $m \gg \max (4,y_{1}^{2},y_{2}^{2})$. In this limit, the Edgeworth expansion and the Gaussian mapping yield the same result. However, when m, y1, and y2 do not satisfy the aforementioned inequalities, we must use [62] and [65] directly.
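The three statistics compared in the figures below can be reproduced directly from [59], [62], and [65]; a short sketch evaluating them along the y1 direction (in units of $\widetilde{{\hat{h}}}_{1}/\tilde{\sigma}$, with $\widetilde{{\hat{h}}}_{2} = 0$) for the two cases shown in Figs. 2 and 3:

```python
import numpy as np

def exact(y1, y2, m):                 # eq. [59], bivariate Student-t
    return (m + 2) * y1 / (m - 2 + y1**2 + y2**2)

def edgeworth(y1, m):                 # eq. [62], the hhat_1 term
    return y1 - (4 * y1**3 - 12 * y1) / (4 * (m - 4) + y1**4 - 6 * y1**2 + 3)

def gauss_map(y1, m):                 # eq. [65], the hhat_1 term
    return (m + 1) * y1 / (m - 2 + y1**2)

y1 = np.linspace(-6.0, 6.0, 7)
for m, y2sq in [(30, 4.0), (6, 1.0)]:     # the cases of Figs. 2 and 3
    print(f"m = {m}, y2^2 = {y2sq}")
    print("  exact    ", np.round(exact(y1, np.sqrt(y2sq), m), 3))
    print("  Edgeworth", np.round(edgeworth(y1, m), 3))
    print("  Gaussian ", np.round(gauss_map(y1, m), 3))
```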

Let us study the performance of the Edgeworth expansion [62] and the Gaussian mapping [65] by comparing them with the result of the bivariate t-distribution [59]. First, Fig. 1 shows the single-variate t-distribution with unit variance ⟨y²⟩ = 1 for m = 30 and 6, together with the normal distribution. As is seen there, the t-distribution with smaller m has heavier tails.

Fig. 1.

Probability distribution functions of the normal distribution φ(y) (solid line) and Student’s t-distribution [61] with m = 30 (long dashed line) and 6 (short dashed line), both normalized to unit variance.

For simplicity of illustration, let us focus on the case $\widetilde{{\hat{h}}}_{2}$ = 0 and depict the dependence of the locally optimal statistic along the y1 direction. Figure 2 shows the locally optimal statistic, expressed in units of $\widetilde{{\hat{h}}}_{{1}}$/$\tilde{\sigma}$, as a function of y1 for the case m = 30. In this figure we have taken a somewhat atypical value y2 = 2, because for smaller |y2| the result of the Gaussian mapping is indistinguishable from that of the exact bivariate t-distribution [59]. As is seen there, even the Edgeworth expansion works well for $|y_{1}| \lesssim 3$.

Fig. 2.

Locally optimal statistics for Student’s t-distribution (solid line), its Edgeworth expansion (short dashed line), and the result of Gaussian mapping (long dashed line) for the case m = 30 and $y_{2}^{2} = 4$. The vertical axis is expressed in units of $\widetilde{{\hat{h}}}_{1}/\tilde{\sigma }$.

Figure 3, on the other hand, shows a highly non-Gaussian case with m = 6. There we have taken $y_{2}^{2} = 1$, a typical value given its unit variance. As is seen there, the agreement between the true bivariate t-distribution and the Gaussian mapping is striking, while the Edgeworth expansion works only in a small vicinity of y1 = 0.

Fig. 3.

Locally optimal statistics for Student’s t-distribution (solid line), its Edgeworth expansion (short dashed line), and the result of Gaussian mapping (long dashed line) for the case m = 6 and $y_{2}^{2} = 1$. The vertical axis is expressed in units of $\widetilde{{\hat{h}}}_{1}/\tilde{\sigma }$.

6. Conclusion

In the present paper, we have considered ways to handle the non-Gaussian nature of detector noise in the detection of gravitational waves with the forthcoming KAGRA and other large-scale laser interferometers. After reviewing the standard theory of hypothesis testing and the matched filter technique used in the conventional analysis of gravitational waves assuming Gaussian noises, we have presented two ways to calculate the likelihood ratio, or the locally optimal statistic that plays the central role in hypothesis testing, in the presence of stationary non-Gaussian noises.

One is the Edgeworth expansion, which can incorporate weak deviations from Gaussian noise; it has been used in various problems near the regime where the central limit theorem applies,17) as well as in the weakly nonlinear evolution of density fluctuations in the Universe.18)

The other is a new method which we call Gaussian mapping. In this formalism we first determine the single-time, or marginal, probability distribution of the noise using training samples, together with the two-time correlation function. We can then formulate the likelihood ratio that fully incorporates these two pieces of information by mapping to a Gaussian distribution. Since the observational determination of higher-order correlation functions is a formidable task, this method in this sense makes the most of the observationally available data on the noise distribution. It can also be applied to analysis in the frequency domain if each normalized Fourier mode obeys the same single-variate distribution function.

Applying these two methods to a specific non-Gaussian distribution, Student’s t-distribution, which mimics actual noise in the sense that it is a symmetric distribution with much heavier tails than the Gaussian, we have shown that the Edgeworth expansion works well if and only if the deviation from Gaussianity is small, whereas the formula based on the Gaussian mapping works very well even in the highly non-Gaussian case.

These methods attempt a frontal attack on the stationary non-Gaussian part of the noise, but there is another approach, known as independent component analysis, which makes use of non-Gaussianity to separate signals from noises.19)–21) This method has not been used for the data analysis of gravitational waves so far, since it requires detailed information from environmental monitors, such as seismographs, which measure various independent noises. In a forthcoming paper22) we shall consider the application of this method to GW data analysis for the first time. We then plan to apply the analytic results of the present and forthcoming papers to actual noise recorded by existing detectors. By pursuing various analysis methods we shall prepare for the completion of the KAGRA detector toward the first direct detection of gravitational waves.

Acknowledgements

The author is grateful to Yousuke Itoh, Nobuyuki Kanda, Masaki Ando, and Yuhei Miyamoto for useful communications. This work was partially supported by the Grant-in-Aid for Scientific Research on Innovative Areas No. 25103504.

References
 
© 2014 The Japan Academy