The tuning of musical instruments has kept music theorists busy since antiquity. It is a commonplace – although no less true for that – to say that each period in the history of music has had its own theory of tuning in order to meet its own musical needs. Likewise, the quantitative language used to calculate and represent these various tuning systems has changed. In the medieval and Renaissance periods, theories of tuning were usually formulated in terms of relative string lengths on a monochord, to be calculated by arithmetic methods.
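The arithmetic of such monochord calculations can be sketched in modern terms (an illustrative reconstruction, not a method from the source): in Pythagorean tuning, each pure fifth corresponds to shortening the string to 2/3 of its length, and lengths that fall below half the string are doubled to stay within the octave.

```python
from fractions import Fraction

def pythagorean_string_lengths(n_fifths=6):
    """Relative monochord string lengths for a chain of pure fifths.

    A pure fifth shortens the string to 2/3 of its length; lengths
    shorter than 1/2 (i.e. beyond the octave) are doubled back
    into the octave, exactly as in arithmetic monochord division.
    """
    length = Fraction(1)
    lengths = [length]
    for _ in range(n_fifths):
        length *= Fraction(2, 3)      # ascend a pure fifth
        if length < Fraction(1, 2):   # fold back into the octave
            length *= 2
        lengths.append(length)
    # descending string length corresponds to ascending pitch
    return sorted(lengths, reverse=True)
```

Six fifths above the fundamental yield the familiar whole-number ratios such as 8:9 for the whole tone, all obtainable by arithmetic alone, which is why this repertoire of intervals suited pre-algebraic calculation.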
From the end of the sixteenth century until around 1800, string lengths remained in use by theorists, but their calculations were often refined by mathematical tools such as root extraction. With root extraction, the various equal and unequal temperaments that dominated theory and practice from the sixteenth century onwards could be adequately described. Musically, this meant that intervals of any size could be divided into equal parts. (With arithmetic methods this was possible only in exceptional cases.) At some point in the seventeenth century, logarithmic measures of pitch were added to the common string-length values, by which a psychologically more realistic picture of the relations among pitches could be presented. Logarithms facilitated the description and calculation of virtually any conceivable tuning system.
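Both tools can be illustrated with a modern sketch (the code is illustrative; the cent, a nineteenth-century unit, stands in here for the earlier logarithmic measures): root extraction divides the octave ratio 2:1 into n equal steps by taking the n-th root of 2, and a logarithmic measure turns equal musical steps into equal numbers.

```python
import math

def equal_tempered_string_lengths(n=12):
    """String lengths for an n-fold equal division of the octave.

    Root extraction: each step shortens the string by a factor of
    the n-th root of 2, making every step exactly equal -- something
    arithmetic with whole-number ratios cannot do in general.
    """
    r = 2 ** (1 / n)               # the ratio found by root extraction
    return [r ** -k for k in range(n + 1)]

def cents(ratio):
    """Logarithmic pitch measure: 1200 cents per octave."""
    return 1200 * math.log2(ratio)
```

On this measure the pure fifth 3:2 comes to roughly 702 cents, while the equal-tempered fifth is exactly 700, which is the kind of comparison that logarithms made immediate.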
Tuning and temperament theory was especially developed by eighteenth-century German authors. They used a variety of methods to describe a great number of tuning systems, both equal and unequal. From about 1800, string lengths were progressively replaced by frequency values to indicate pitches, making it possible to establish empirically the relations between theory and practice.
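The shift from string lengths to frequencies rests on a simple physical relation that can be sketched as follows (a modern illustration; the 440 Hz reference pitch is an assumption for the example, not a value from the source): other factors being equal, a string's frequency is inversely proportional to its length, so the older string-length ratios convert directly into frequency ratios.

```python
def length_to_frequency(relative_length, reference_hz=440.0):
    """Convert a relative string length to a frequency in Hz.

    Frequency is inversely proportional to string length (for fixed
    tension and mass per unit length), so halving the string doubles
    the frequency: the octave.
    """
    return reference_hz / relative_length
```

For instance, a string at 2/3 of the reference length sounds the pure fifth above it, which is why the old 3:2 ratio reappears unchanged as a frequency ratio in nineteenth-century acoustics.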