
Model and Estimation

In model ([*]) we assumed that the function $f$ is observed through linear operators $L_i$ plus random errors. Sometimes the function is observed indirectly through nonlinear operators (O'Sullivan, 1990, 1991; O'Sullivan and Wahba, 1985; Wahba, 1987, 1990).

We consider the following non-parametric nonlinear regression (NNR) model

$\displaystyle y_i=\eta(\mbox{\boldmath$f$}; \mbox{\boldmath$t$}_i) + \epsilon_i, ~~~~ i=1, \cdots, n ,$     (30)

where $\eta$ is a known function of $\mbox{\boldmath$t$}_i=(t_{1i}, \cdots,
t_{di})$ in an arbitrary domain ${\cal T}$, $\mbox{\boldmath$f$}=(f_1,\cdots,f_q)$ is a vector of unknown non-parametric functions that act nonlinearly as parameters of $\eta$, and $\mbox{\boldmath$\epsilon$}=(\epsilon_1,\cdots,\epsilon_n)^T$ is a vector of random errors distributed as $\mbox{N} ({\bf0}, \sigma^{2} W^{-1})$. The functions $f_j$ may have the same or different domains. We denote the model space of $f_j$ by
$\displaystyle {\cal H}_j={\cal H}_{j0}\oplus \sum_{k=1}^{p_j}{\cal H}_{jk} .$     (31)
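As a toy illustration of model (30), data can be simulated once $\eta$ and the true functions are fixed. All specifics below — the choice $q=1$, $\eta(f;t)=\exp\{f(t)\}$, the particular "true" $f$, and $W=I$ — are assumptions made for illustration only, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.linspace(0, 1, n)

# A hypothetical "true" function f (illustrative choice, not from the text).
f_true = np.sin(2 * np.pi * t)

# A known nonlinear eta acting on f: here eta(f; t_i) = exp{f(t_i)}.
eta = np.exp(f_true)

# Gaussian errors with W = I, so Var(eps) = sigma^2 I in model (30).
sigma = 0.1
y = eta + sigma * rng.standard_normal(n)
```

With $q=1$ and the evaluational functional $L_i f = f(t_i)$, this is exactly the special case (33) discussed below.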

Let $\mbox{\boldmath$y$}=(y_1,\cdots,y_n)^T$ and $\mbox{\boldmath$\eta$}=(\eta(\mbox{\boldmath$f$};\mbox{\boldmath$t$}_1),\cdots,\eta(\mbox{\boldmath$f$};\mbox{\boldmath$t$}_n))^T$. We estimate $\mbox{\boldmath$f$}$ as the minimizer of the following penalized weighted least squares criterion

$\displaystyle (\mbox{\boldmath$y$}-\mbox{\boldmath$\eta$})^T W (\mbox{\boldmath$y$}-\mbox{\boldmath$\eta$}) + n\lambda \sum_{j=1}^q \sum_{k=1}^{p_j}
\theta_{jk}^{-1} \vert\vert P_{jk}f_j\vert\vert^2 ,$     (32)

where $P_{jk}$ is the orthogonal projection operator in ${\cal H}_j$ onto ${\cal H}_{jk}$, and $\lambda$ and the $\theta_{jk}$'s are smoothing parameters.

In the following we consider the special case when

$\displaystyle \eta(\mbox{\boldmath$f$};\mbox{\boldmath$t$}_i)=h(L_{1i}f_1,\cdots,L_{qi}f_q),$     (33)

where $h$ is a known nonlinear function and the $L_{ji}$'s are linear operators. ([*]) holds for most applications, and the $L_{ji}$'s are usually evaluational functionals. When ([*]) does not hold, we can use a linearization method to approximate $\eta(\mbox{\boldmath$f$};\mbox{\boldmath$t$}_i)$ by a linear combination of linear operators.
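The linearization step can be sketched as follows (the notation $\mbox{\boldmath$f$}^-$ for the current estimate is illustrative, not taken from the text). Given $\mbox{\boldmath$f$}^-$, a first-order expansion of $h$ in (33) gives

```latex
\eta(\mbox{\boldmath$f$};\mbox{\boldmath$t$}_i)
  \approx \eta(\mbox{\boldmath$f$}^-;\mbox{\boldmath$t$}_i)
  + \sum_{j=1}^{q}
    \left. \frac{\partial h}{\partial (L_{ji}f_j)} \right|_{\mbox{\boldmath$f$}=\mbox{\boldmath$f$}^-}
    L_{ji}(f_j - f_j^-) .
```

Since the partial derivatives are evaluated at the fixed $\mbox{\boldmath$f$}^-$, each term is a scaled linear operator applied to $f_j$, so each iteration reduces to a penalized weighted least squares problem of the linear form.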

When ([*]) holds, the solutions to ([*]) have the form ([*]). Specifically,

$\displaystyle \hat{f}_j(\mbox{\boldmath$t$}) = \sum_{l=1}^{M_j} d_{jl} \phi_{jl}(\mbox{\boldmath$t$}) + \sum_{i=1}^n c_{ji} \sum_{k=1}^{p_j}\theta_{jk} \xi_{kji}(\mbox{\boldmath$t$}) ,$     (34)

where $\phi_{jl},~l=1,\cdots,M_j$, are basis functions of ${\cal H}_{j0}$, $\xi_{kji}(\mbox{\boldmath$t$}) = L_{ji(\cdot)} R_{jk}(\mbox{\boldmath$t$},\cdot)$, and $R_{jk}$ is the reproducing kernel of ${\cal H}_{jk}$. We estimate the coefficients $d_{jl}$'s and $c_{ji}$'s using ([*]) with the $f_j$'s replaced by ([*]). Since $h$ in ([*]) is nonlinear, an iterative method has to be used to solve for these coefficients. Two methods are used: the Gauss-Newton and Newton-Raphson procedures. See Ke and Wang (2002) for more details.
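A minimal sketch of the Gauss-Newton idea on a toy problem: observations $y_i=\exp\{f(t_i)\}+\epsilon_i$, with $f$ represented in a crude finite basis and a ridge-type penalty on the non-null-space coefficients. Everything here — the basis, the fixed smoothing parameter, and the warm start — is an illustrative stand-in, not the ASSIST implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80
t = np.linspace(0, 1, n)
y = np.exp(np.sin(2 * np.pi * t)) + 0.05 * rng.standard_normal(n)

# Crude finite basis for f: columns 0-1 mimic the null-space basis (phi),
# the rest stand in for the penalized smooth part (a rough analogue of
# the representer terms xi in (34), not the RKHS construction itself).
B = np.column_stack(
    [np.ones(n), t]
    + [np.sin(2 * np.pi * k * t) for k in (1, 2, 3)]
    + [np.cos(2 * np.pi * k * t) for k in (1, 2, 3)]
)
lam = 1e-4                       # smoothing parameter, fixed here for simplicity
P = np.eye(B.shape[1])
P[:2, :2] = 0.0                  # do not penalize the null-space coefficients

# Warm start: ordinary least squares fit to log of (clipped) y.
beta = np.linalg.lstsq(B, np.log(np.clip(y, 0.1, None)), rcond=None)[0]

for _ in range(25):              # Gauss-Newton iterations
    f = B @ beta
    eta = np.exp(f)              # eta(f; t_i) = exp{f(t_i)}
    J = eta[:, None] * B         # Jacobian of eta w.r.t. beta (chain rule)
    r = y - eta                  # current residuals
    # Each iteration solves a linearized penalized least squares problem.
    step = np.linalg.solve(J.T @ J + n * lam * P, J.T @ r)
    beta = beta + step
    if np.linalg.norm(step) < 1e-10:
        break

f_hat = B @ beta                 # estimate of f on the design points
```

In practice the smoothing parameters are selected data-adaptively rather than fixed, and the Newton-Raphson variant would additionally use second derivatives of $h$.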


Yuedong Wang 2004-05-19