Published online by Cambridge University Press: 15 November 2002
This paper deals with semiparametric convolution models, where the noise sequence has a centered Gaussian distribution with unknown variance. Nonparametric convolution models, in which the noise distribution is entirely known, have been widely studied over the past decade. Their main property is the following: the more regular the noise distribution is, the worse the rate of convergence for the estimation of the signal's density g [5]. Nevertheless, regularity assumptions on the signal density g improve those rates of convergence [15]. In this paper, we show that when the noise (assumed to be centered Gaussian) has an unknown variance σ², which is in fact always the case in practical applications, the rates of convergence for the estimation of g deteriorate severely, whatever regularity g is assumed to have. More precisely, the minimax risk for the pointwise estimation of g over a class of regular densities is bounded from below by a constant over log n. We construct two estimators of σ², and in particular an estimator which is consistent as soon as the signal has a finite first-order moment. As a consequence, we also point out the deterioration of the rate of convergence in the estimation of the parameters in the nonlinear errors-in-variables model.
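As a minimal sketch of the setting and of the lower bound stated above, the model can be written as follows; the observation/signal/noise notation (Y_i, X_i, ε_i), the class notation 𝒢, and the use of squared-error pointwise risk are our own assumptions for illustration, not taken from the paper.

% Sketch (our notation): semiparametric convolution model with Gaussian noise
% of unknown variance, and the pointwise minimax lower bound from the abstract.
\[
  Y_i = X_i + \varepsilon_i, \qquad
  X_i \sim g \ \text{(unknown signal density)}, \qquad
  \varepsilon_i \sim \mathcal{N}(0,\sigma^2), \quad \sigma^2 \ \text{unknown},
\]
\[
  \inf_{\hat g_n}\ \sup_{g \in \mathcal{G}}\
  \mathbb{E}\!\left[\bigl(\hat g_n(x) - g(x)\bigr)^2\right]
  \ \geq\ \frac{C}{\log n},
\]
% where \mathcal{G} is a class of regular densities, x is a fixed point,
% and C > 0 is a constant independent of n.

This logarithmic lower bound is to be contrasted with the polynomial (in n) rates available when σ² is known, which is the sense in which the rates "deteriorate severely" in the abstract.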