Introduction
The testing of soils for molybdenum (Mo) and other micronutrients has been reviewed extensively in recent publications (Gupta and Lipsett, 1981; Anderson and Mortvedt, 1982; Cox, 1987; Johnson and Fixen, 1990; Sims and Johnson, 1991; Sims, 1996). The general objectives of testing soils for any nutrient have been to assess the soil's capacity to supply plant-available nutrients during crop growth and to gather data that can guide producers in obtaining the best economic response to fertilizer application. Fitts and Nelson (1956) suggested that soil testing can be divided into four phases: (1) sampling the soil, (2) conducting tests to determine nutrient availability, (3) calibrating test findings with crop responses, and (4) interpreting the findings and making recommendations.
Testing soils for Mo in the classic sense is difficult because of the relatively small amounts of Mo in soils (0.1–30 mg kg–1) (Kubota, 1976), the importance of seed Mo reserves in supplying crop needs (Peterson and Purvis, 1961; Harris, Parker, and Johnson, 1965; Gurley and Giddens, 1969), the importance of soil properties that affect Mo availability (Lowe and Massey, 1965; Massey, Lowe, and Bailey, 1967; Karimian and Cox, 1978, 1979; Burmester, Adams, and Odom, 1988), and the low requirements of most crops for Mo (0.1–0.5 mg kg–1 tissue).