This paper deals with semiparametric convolution models, in which the
noise sequence has a centered Gaussian distribution with unknown
variance. Nonparametric convolution models, where the noise
distribution is entirely known, have been widely studied over the past
decade. Their main property is the following: the more regular the
distribution of the noise, the worse the rate of convergence for the
estimation of the signal density g [5]. Nevertheless, regularity
assumptions on the signal density g improve those rates of
convergence [15].
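For concreteness, the setup can be sketched as follows (the notation Y_i, X_i, ε_i is ours, assuming the usual i.i.d. convolution framework, and is not spelled out in this abstract):
\[
Y_i = X_i + \varepsilon_i, \qquad i = 1, \dots, n,
\]
where the X_i are i.i.d. with unknown density g, the \varepsilon_i are i.i.d. \mathcal{N}(0, \sigma^2) and independent of the X_i, and only the Y_i are observed; the common density of the observations is then the convolution g * \varphi_\sigma, with \varphi_\sigma the \mathcal{N}(0, \sigma^2) density.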
In this paper, we show that when the noise (assumed to be centered
Gaussian) has an unknown variance σ² (which is in fact always the case
in practical applications), the rates of convergence for the
estimation of g deteriorate drastically, whatever regularity g is
assumed to have. More precisely, the minimax risk for the pointwise
estimation of g over a class of regular densities is bounded from
below by a constant divided by log n.
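Schematically, the lower bound takes the following form (the constant C > 0 and the class \mathcal{C} of regular densities stand in for the quantities defined precisely in the paper):
\[
\inf_{\hat g_n} \sup_{g \in \mathcal{C}} \mathbb{E}\bigl[\bigl(\hat g_n(x) - g(x)\bigr)^2\bigr] \;\geq\; \frac{C}{\log n},
\]
where the infimum is taken over all estimators \hat g_n based on the observations Y_1, \dots, Y_n and x is a fixed point.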
We construct two estimators of σ²; in particular, one of them is
consistent as soon as the signal has a finite first-order moment.
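The consistency claim can be restated schematically as follows (the construction of \hat\sigma_n^2 is given in the paper; only the property is written out here):
\[
\hat\sigma_n^2 \;\xrightarrow{\;\mathbb{P}\;}\; \sigma^2 \quad \text{as } n \to \infty, \qquad \text{provided } \mathbb{E}|X_1| < \infty.
\]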
As a consequence, we also point out the resulting deterioration of the
rate of convergence for estimating the parameters in the nonlinear
errors-in-variables model.
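As a reminder (in our notation, not taken from the abstract), the nonlinear errors-in-variables model referred to here is typically of the form
\[
Y_i = f_\theta(X_i) + \xi_i, \qquad Z_i = X_i + \varepsilon_i,
\]
where only the pairs (Y_i, Z_i) are observed and the covariates X_i are contaminated by the Gaussian noise \varepsilon_i; the deterioration above then carries over to the rate at which the parameter θ can be estimated.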