1. It often happens that we have a series of observed data for different values of the argument and with known standard errors, and wish to remove the random errors as far as possible before interpolation. In many cases previous considerations suggest a form for the true value of the function; then the best method is to determine the adjustable parameters in this function by least squares. If the number required is not initially known, as for a polynomial where we do not know how many terms to retain, the number can be determined by finding out at what stage the introduction of a new parameter is not supported by the observations*. In many other cases, again, existing theory does not suggest a form for the solution, but the observations themselves suggest one when the departures from some simple function are found to be much less than the whole range of variation and to be consistent with the standard errors. The same method can then be used.

There are, however, further cases where no simple function is suggested either by previous theory or by the data themselves. Even in these the presence of errors in the data is expected. If ε is the actual error of any observed value and σ the standard error, the expectation of Σε²/σ² is equal to the number of observed values. Part, at least, of any irregularity in the data, such as is revealed by the divided differences, can therefore be attributed to random error, and we are entitled to try to reduce it.