Refining and implementing the Food Assortment Scoring Tool (FAST) in food pantries

Abstract

Objective

Hunger relief agencies have a limited capacity to monitor the nutritional quality of their food. Validated measures of food environments, such as the Healthy Eating Index-2010 (HEI-2010), are challenging to use due to their time intensity and requirement for precise nutrient information. A previous study used out-of-sample predictions to demonstrate that an alternative measure correlated well with the HEI-2010. The present study revised the Food Assortment Scoring Tool (FAST) to facilitate implementation and tested the tool’s performance in a real-world food pantry setting.

Design

We developed a FAST measure with thirteen scored categories and thirty-one sub-categories. FAST scores were generated by sorting and weighing foods in categories, multiplying each category’s weight share by a healthfulness parameter and summing the categories (range 0–100). FAST was implemented by recording all food products moved over five days. Researchers collected FAST and HEI-2010 scores for food availability and foods selected by clients, to calculate correlations.

Setting

Five food pantries in greater Minneapolis/St. Paul, Minnesota, USA.

Subjects

Food carts of sixty food pantry clients.

Results

The thirteen-category FAST correlated well with the HEI-2010 in prediction models (r = 0·68). FAST scores averaged 61·5 for food products moved, 63·8 for availability and 62·5 for client carts. As implemented in the real world, FAST demonstrated good correlation with the HEI-2010 (r = 0·66).

Conclusions

The FAST is a flexible, valid tool to monitor the nutritional quality of food in pantries. Future studies are needed to test its use in monitoring improvements in food pantry nutritional quality over time.

An estimated 46·5 million US households annually rely on a hunger relief network as a source of food to alleviate food insecurity(1). Within the US context, food pantries serve as the main interface between food-insecure clients (who lack access to enough food for an active, healthy life for all household members) and larger-scale food distributors (food banks), the majority of which are part of a nationwide network (Feeding America). Pantry clients have particularly high rates of diet-sensitive chronic disease(1,2), with over one-third of households reporting a member with diabetes(1). The overall dietary pattern of clients is suboptimal(3,4); while no nationally representative studies have been conducted in the USA, evidence suggests that common deficits for pantry clients may include fruits, vegetables and a number of micronutrients(4–7).

There are no nutritional standards for food distributed within the US hunger relief system, which relies heavily upon retail donations and commodity surplus for its food inventory. When measured in recent years, pantries have demonstrated room for improvement(8) and face particular challenges in offering adequate fruit, dairy and whole grains(8). Assessments with hunger relief agencies indicate their desire to monitor their food quality through automated processes(9), but adopting new practices is challenging. Pantries have a limited capacity to track food from multiple sources (food bank orders, retail rescue, donations) as it moves through the pantry. Functioning with scarce resources, pantries have few opportunities to overhaul their existing practices(10).

A review of existing nutrition monitoring tools used by food banks yields several possible options for pantries, but all have limitations. The CHOP Nutrient Analysis Tool, which has been successfully implemented and widely disseminated(11,12), assigns a nutrient value to each food, ranking them according to nutritional quality in eleven food categories (e.g. fruit, dairy, cereal)(11). However, CHOP functions through a food bank’s agency ordering system, so it excludes food procured outside the ordering system. Although CHOP is useful for comparing the nutritional value of two similar products, it cannot be used to assess the overall nutritional quality of a group of foods or to track nutritional quality over time (Greater Pittsburgh Community Food Bank, personal communication, 15 February 2016). Other nutritional measures that could be options for hunger relief agencies with more resources may not be accessible to smaller pantries. For instance, acquiring a proprietary tool like NuVal(13) would likely be too expensive, and limits in staff capacity make it difficult to implement a complex scoring system aligned with US dietary guidelines(10,13,14).

One such tool, the US Department of Agriculture’s (USDA) Healthy Eating Index-2010 (HEI-2010), was tested as a means for pantries to monitor the nutritional quality of their food(8). The HEI-2010 is an updated version of the HEI-2005 and assesses how well a set of foods aligns with the Dietary Guidelines for Americans(15–17). Scores are calculated by summing twelve nutrient subcomponents for a total score of 0–100 according to USDA standards, with higher scores indicating better alignment with recommendations(15). HEI-2010 is based on nutrient density, so it can assess any set of foods, and scores are comparable across levels of the food system(18). Therefore, it is useful for measuring the food retail environment(19–22) and, more recently, the hunger relief system(8).

The HEI-2010 calculation is, however, quite complex, involving a time-intensive coding system, nutritional conversions and researcher assistance(10), making it unsustainable for pantries to use for ongoing self-monitoring. Moreover, HEI-2010 calculations used in the previous study(8) were based on ordering receipts only, since the cost of sorting and coding precluded the inclusion of other sources of food. Finally, more than 25 % of food ordered was in ‘miscellaneous’ categories excluded from the HEI-2010 calculations due to limited information. These limitations necessitate a simplified system for pantries to use with all their procurement sources.

To address these limitations, King et al. designed and tested an alternative nutritional quality index for the hunger relief system(10). Their goal was to develop an index that: (i) is easier to implement than the HEI-2010; (ii) can be used across the hunger relief system to track food from different sources over time; (iii) is transparent to users; and (iv) correlates well with the HEI-2010(10). The Hunger Relief Nutrition Index (HRNI) is defined by the expression:

$$HRNI = \sum\limits_{h=1}^{12} \sum\limits_{i=1}^{n} \beta_{hi}\,GWS_{i},$$

where h is an HEI-2010 component index, i is a food category index, GWS_i is the gross weight share for food category i, and β_hi is a parameter for HEI-2010 component h and food category i. The gross weight share for a category is the gross weight of food in the category divided by the total gross weight for the food assortment being scored. The gross weight is the weight of food including its packaging – for example, the cans of canned fruit – as opposed to the net weight, which only includes the food inside the packaging.
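The gross-weight-share calculation can be sketched in a few lines of Python; the category names and weights below are hypothetical illustrations, not study data:

```python
# Sketch of the gross-weight-share calculation that underlies the index.
# Category names and weights (kg, food plus packaging) are hypothetical.

def gross_weight_shares(weights):
    """Divide each category's gross weight by the assortment's total gross weight."""
    total = sum(weights.values())
    return {category: w / total for category, w in weights.items()}

weights = {"fresh produce": 50.0, "desserts/snacks": 25.0, "dairy": 25.0}
shares = gross_weight_shares(weights)
# shares: fresh produce 0.50, desserts/snacks 0.25, dairy 0.25
```

Each share is then multiplied by the corresponding category parameter and summed, as the index expression above describes.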

King et al.(10) estimated the food category parameters by regressing monthly HEI-2010 component scores for food pantry orders on the associated food category gross weight shares. They calculated correlations between out-of-sample forecasts and HEI-2010 scores to assess the performance of the index, which yielded high correlation levels (r > 0·75).

The HRNI has the potential to remedy several limitations of the HEI-2010 in the hunger relief system. It requires only that agencies sort foods into a fixed number of categories, rather than linking all inventory items to their nutrition labels. However, the previous study(10) did not test whether: (i) implementation of the HRNI with the eighteen- to thirty-two-category schemes tested was feasible and acceptable for pantries; and (ii) the correlation between HEI-2010 and HRNI would be similarly high when these measures were generated in an actual food pantry setting.

The current study revised the HRNI food categories, adjusted the index parameter estimation procedures, and assessed the feasibility and validity of the revised index. We also identified sub-categories and developed procedures for adjusting category parameters to reflect changes within category assortments. Finally, the project team worked with five pantries to implement this new healthfulness index and determine its real-world correlation with the HEI-2010.

Methods

Developing FAST categories

Primary categories

King et al. recognized that specification of food categories is a critical design choice because it affects both ease of implementation and the responsiveness of the HRNI measure to differences in the healthfulness of food assortments. Ease of implementation declines as the number of food categories increases. Responsiveness of the index generally increases with the number of food categories used to construct it, but King et al. found that the index with eighteen combined Consumer Expenditure Survey (CEX) categories performed comparably to the full index set of thirty-two Food Bank Product Type Code (FBC) categories(10).

In an iterative process spanning several months, pantry representatives and researchers met to determine a revised food category scheme that was feasible for pantries to implement. Pantry representatives were identified, invited and hosted by our partnering local food bank, and consisted of paid staff (n 5) and one volunteer from five pantries in the Minneapolis/St. Paul metropolitan area, Minnesota. Pantry representatives were selected based on their interest and readiness to adopt a healthfulness measure and/or previous demonstration of commitment to health initiatives in their food pantry.

The revised index was renamed based on feedback from users and is hereafter called the Food Assortment Scoring Tool (FAST). In preliminary discussions with pantry representatives, there was a clear concern about the potential number of food categories needed. Moreover, the importance of maintaining coherence between FAST categories and those already in place for inventory tracking by the food banks was also apparent to the project team.

For three sets of category schemes discussed with food pantry representatives, researchers estimated food category parameters and calculated out-of-sample forecasts. Version 1 of the tool had fifteen categories. In an attempt to simplify further, Version 2 reduced the number of categories to thirteen by combining Desserts with Snacks and combining Cooking with Condiments. Version 3 made additional refinements that representatives suggested would simplify sorting, including: (i) moving 100 % fruit juice to Beverages from Processed Fruits/Vegetables; (ii) moving broth to the soup category from Cooking/Condiments; (iii) moving flour to Cooking/Condiments from the grain categories; (iv) moving margarine/butter to Dairy from Cooking/Condiments; (v) moving baking mixes to Cooking/Condiments from Desserts/Snacks; and (vi) removing water from the total weight. Pantry representatives approved the third and final FAST version with thirteen scored categories plus water.

The thirteen-category FAST is described in the first two columns of Table 1. In general, categories are broad for less nutritious items. For instance, whether sweet or salty, desserts and snacks both contribute few nutrients and would therefore have a downward effect on FAST scores. Keeping categories broad is generally more desirable for food pantries. Conversely, the final FAST categories make several category distinctions that reflect nutritional priorities – for instance, distinguishing between whole grains and non-whole grains, and between lean and processed meats – that are necessary to maintain good construct validity.

Table 1 FAST categories, descriptions and sub-categories

FAST, Food Assortment Scoring Tool.

We further collaborated with pantry representatives to develop a sorting protocol with a detailed list of examples of how specific food items should be categorized; an extensive list is provided in the online supplementary material, Supplemental Table 1. Priority was placed on sorting rules that were practical and intuitive for food pantry staff and volunteers. For example, all juices were put in one category (beverages), rather than sorting 100 % fruit juices into the fruit category.

Finally, in a separate process, the research team reviewed the categories with a regional food bank in the Feeding America network to ensure that the categories were reasonably aligned with their existing national food codes, called UNC categories. The existing thirty-three UNC categories can be mapped to the thirteen FAST categories. In most cases, UNC categories collapsed into a single FAST category. For example, the two UNC categories for condiments, Spices–Condiments–Sauces and Dressings, were collapsed into the FAST category Baking/Condiments. Occasionally, UNC categories were split, as with the UNC Bread–Bakery, which was separated into Whole Grain Bread–Bakery and Non-Whole Grain Bread–Bakery for the FAST measure.

Sub-categories

As noted by King et al.(10) (p. 546), the tool’s design assumes that the assortment of foods within categories is consistent across sources and time. With a decrease in categories, the potential for variation in the assortment within categories increases and may lessen the reliability of scores. To allow adjustments for FAST parameters to reflect changes in the assortment of foods within categories, we also defined thirty-one sub-categories in the last column of Table 1. Often the sub-category definitions reflect nutritional differences; e.g. salty v. sweet desserts and snacks. Other times, the sub-category definitions reflect differences in form or packaging; e.g. canned v. dried processed fruits and vegetables. The sub-categories for each primary category are mutually exclusive and exhaustive.

Data for estimating parameters

As described by Nanny et al.(8), all food items ordered by pantries from two food banks – Second Harvest Heartland (SHH) and The Food Group – between 1 January 2013 and 30 March 2015 were coded to food categories in the Food Patterns Equivalent Database to calculate HEI-2010 scores. For the King et al.(10) study, all the food items ordered from SHH during this same period were also coded to the CEX and FBC categories. These items were also coded with the categories and sub-categories defined in Table 1 for this analysis. The final data set includes 5786 ‘food pantry month’ observations with data on the HEI-2010 score and gross weights for food ordered in each of the FAST categories and sub-categories.

Estimation procedures

King et al.(10) estimated index parameters for individual components of the HEI-2010, used the index to calculate component scores and then summed component scores for an overall index. While this approach has the advantage of transparency, two problems arise because the HEI-2010 component scores and predictions are often truncated at either zero or the upper bound for the component. First, it requires a more complex estimation method, Tobit. Second, the weighted average procedure described by King et al.(10) (p. 539) may not be valid when scores are truncated, making it difficult to combine index scores from different times or sources.

Therefore, we estimated parameters for an overall FAST score only. Since there were no extreme HEI-2010 values of 0 or 100 in the estimation data set, it was appropriate to use ordinary least-squares regression to estimate parameters based on the same data set of monthly HEI-2010 scores and corresponding assortments used by King et al.(10). Using notation consistent with the earlier paper, the regression model is:

(1) $$HEI_{kt} = \sum\limits_{i=1}^{n} \delta_{i}\left(\frac{w_{ikt}}{TW_{kt}}\right) + \varepsilon_{kt},$$

where HEI_kt is the HEI-2010 score for pantry k in month t, w_ikt is the weight of food in category i for pantry k in month t, TW_kt is the total weight of food for pantry k in month t, δ_i is a parameter to be estimated for food category i, and ε_kt is an error term for pantry k in month t. We estimate this model for all the sub-categories.
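The estimation in equation (1) is an ordinary least-squares fit without an intercept. A minimal NumPy sketch on synthetic pantry-month data (the number of categories, the parameter values and the noise level are all made up for illustration) might look like:

```python
import numpy as np

# Synthetic stand-in for the pantry-month data set: each row holds one
# pantry-month's gross weight shares across three hypothetical categories.
rng = np.random.default_rng(0)
X = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=200)    # rows sum to 1

true_delta = np.array([80.0, 30.0, 60.0])             # hypothetical parameters
y = X @ true_delta + rng.normal(scale=2.0, size=200)  # noisy HEI-like scores

# OLS with no intercept, matching equation (1): the shares in each row
# sum to one, so an intercept would be collinear with the regressors.
delta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With enough observations the fitted parameters recover the ordering of the true values, which is what lets the index rank category healthfulness.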

King et al.(10) used data from 2013 for parameter estimation and data from 2014 and early 2015 for out-of-sample forecasting. Out-of-sample forecasts are useful in evaluating alternative category schemes because they indicate how the FAST will perform with new data that were not used to estimate the parameters. In the present study, we used data from 2014 and early 2015 for parameter estimation and data from 2013 for out-of-sample forecasting, since there were no observed orders in 2013 for some sub-category foods. We calculated simple correlations between out-of-sample forecasts and observed HEI-2010 scores. For implementation, we estimated parameters using the entire data set.

Pantries implemented FAST using the thirteen primary food categories. Primary category parameters are estimated by a weighted average of the corresponding sub-category parameters, using the following formula:

(2) $$\Delta_{j} = \sum\limits_{m=1}^{M} \delta_{mj}\left(\frac{sw_{mj}}{CW_{j}}\right),$$

where Δ_j is the parameter for primary category j, δ_mj is the sub-category parameter for sub-category m within primary category j, and sw_mj and CW_j are the total observed weights of food across all sample observations for sub-category m and for the entire category j, respectively. The following expression is then used to calculate the FAST score for any food assortment:

(3) $$FAST = \sum\limits_{j=1}^{13} \Delta_{j}\left(\frac{w_{j}}{TW}\right),$$

where w_j is the gross weight of food in category j and TW is the total gross weight of the assortment.
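Equations (2) and (3) can be sketched together in Python; all parameter values and weights below are hypothetical stand-ins, not the study's estimates:

```python
# Sketch of equations (2) and (3): roll sub-category parameters up into a
# primary-category parameter, then score a food assortment. All numbers
# are hypothetical illustrations.

def primary_parameter(sub_params, sub_weights):
    """Equation (2): average of sub-category parameters, weighted by each
    sub-category's share of the primary category's total observed weight."""
    total = sum(sub_weights)
    return sum(d * (w / total) for d, w in zip(sub_params, sub_weights))

def fast_score(params, weights):
    """Equation (3): sum of parameter times gross weight share over categories."""
    total = sum(weights)
    return sum(d * (w / total) for d, w in zip(params, weights))

# Two hypothetical sub-categories with observed weight shares 0.871 and 0.129
delta_j = primary_parameter([61.2, 78.4], [871.0, 129.0])

# Hypothetical three-category assortment scored with made-up parameters
score = fast_score([63.4, 30.0, 90.0], [40.0, 30.0, 30.0])
```

The same two-step structure means a pantry only ever sorts and weighs into the thirteen primary categories; the sub-category detail lives entirely in the parameter estimation.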

Implementation

FAST implementation at the pantry involved: (i) sorting foods into the defined categories; (ii) weighing the foods in each category; (iii) calculating a gross weight share for each category by dividing its gross weight by the total weight of all scored foods; (iv) multiplying each gross weight share by a model parameter that reflects its healthfulness; and (v) summing together the categories for a total score (range 0–100).

At each pantry, researchers trained a lead team of one to three key staff and volunteers representing those most responsible for stocking food, using protocols developed by the study team. The lead team communicated FAST procedures to other staff and volunteers who occasionally stocked food. Over five consecutive days, pantry staff and volunteers at five pantries sorted and weighed the food that moved onto the shelves from storage areas (‘flow’). The resulting data were used to calculate a flow FAST score of the nutritional quality of food available to clients.

Implementation of the FAST tool over five days was completed by all pantries. Procedures were customized at each site. In four pantries, the FAST score was measured for all food that moved from the warehouse or storage area to the shelf, where it was available to clients. In the remaining pantry, which mostly distributed pre-packed bags, volunteers measured the food that came into the pantry over five days. Other procedural customizations included the frequency of restocking (e.g. continuous restocking v. once per day), the reliance on volunteers (v. staff) and the data recording method (e.g. the placement of the weight-tracking logs). In most cases, sorting was not a daily task with a finite start and end; rather, it was done in small batches as food was being brought to the shelf. Thus, the amount of time that sorting adds, above and beyond normal stocking and weighing procedures, was not easily measurable.

Pantries were diverse in their operations, staffing procedures and dependence on volunteers. They reported serving between twenty and sixty-five clients per day, were open between 17 and 32·5 h per week and relied on zero to fifty volunteers per day. In the month of the present evaluation, pantries distributed between 11 851 kg (26 129 lb) and 55 142 kg (121 568 lb) of food.

After the FAST was finalized and successfully implemented, an interactive tool was created (available at http://www.healthdisparities.umn.edu/research-studies/hitt-4-health).

Evaluation

The evaluation’s purpose was to determine the degree of correlation between FAST and HEI-2010 scores using the set of foods observed during field testing in five food pantries. Data collection took place over five consecutive days when the pantry was open, from June to August 2016. Data were collected at the level of the client food cart (n 60) and the pantry (n 5). The study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects were approved by the University of Minnesota.

Client measures

The FAST ‘client cart’ measures were collected over three days during the evaluation week, totalling twelve clients at each pantry. Clients were recruited from the pantry after they had selected their food (to avoid influencing choice). Participants were eligible if they were ≥18 years old and mentally capable of consent. Non-English speakers were eligible if a family or staff member was available to translate study materials. During recruitment, all clients exiting the pantry were approached, unless the data collectors were occupied with another assessment. After giving informed consent, clients completed a survey asking about demographic information and use of the pantry, other food assistance programmes and grocery stores. Participants were offered a $US 10 gift card. The participation rate was 66 %.

During the survey, data collectors recorded FAST and HEI-2010 measures for the food the participants obtained at their visit. For the FAST measure, data collectors sorted the foods into categories and weighed them on a scale to obtain gross weights. For the HEI-2010, they recorded detailed descriptions of each product, including name, type (dry, frozen, refrigerated), brand, additional descriptors (e.g. low sodium, heavy syrup), net product weight and total count of the item. Most pre-packaged items were recorded by photographing them to capture the product information, which was subsequently entered into a database. For items like loose produce, each product was weighed and the product information was entered directly into the database.

Pantry measures

The FAST ‘availability’ measure was a one-time snapshot reflecting food on the shelf available to the client during the evaluation week. Availability was assessed when the pantry was closed, but adequately stocked. Food on the shelf was moved to a scale and weighed. Gross weights (minus the weights of boxes and carts) were recorded in the appropriate FAST category and returned to the shelf. The process was quick because whole sections of food that fell in the same FAST category were often placed near each other (e.g. condiments, dairy) and could all be moved and weighed together.

A complete one-time inventory of food available on the shelf was done concurrently with the FAST assessment to collect HEI-2010 data. Data collectors recorded the item details for each product, net product weight and the exact count of a product. For pre-packaged items, data were obtained from package labels. Unpackaged items like produce were weighed with the container weight (e.g. bin, cart) subtracted. One pantry had a slightly modified procedure: (i) the FAST inventory measure was collected by pantry staff rather than data collectors; and (ii) since this pantry was large and inventory could only be conducted after hours, the inventory was conducted in stages (i.e. fruits and vegetables only on Day 1, shelves only on Day 2, bakery only on Day 5).

Analysis

A registered dietitian matched the foods from the inventory (or food pantry clients’ carts) to product descriptions in the USDA’s Food and Nutrient Database for Dietary Studies 5.0(23). A second registered dietitian independently performed the matching with a 10 % sample of food products, and all disagreements were discussed and reconciled. Net food weights were converted to grams. When foods were recorded in volumes, products were converted using USDA’s National Nutrient Database for Standard Reference. Publicly available SAS macros provided by the National Cancer Institute were used to calculate the HEI-2010(24). After FAST and HEI-2010 scores were calculated, their correlation was calculated for three sets of observations: availability (n 5), client carts (n 60) and total (n 65).
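The validation step reduces to a Pearson correlation between paired FAST and HEI-2010 scores. A minimal sketch, using made-up paired scores rather than the study's data, could be:

```python
import numpy as np

# Hypothetical paired scores for five observations; the study's actual
# data comprised 5 availability and 60 client-cart observations.
fast_scores = np.array([55.8, 62.1, 63.8, 66.5, 72.2])
hei_scores = np.array([63.0, 68.9, 70.4, 74.1, 79.5])

# Pearson correlation coefficient between the two measures
r = np.corrcoef(fast_scores, hei_scores)[0, 1]
```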

Results

FAST parameters

From the estimation procedures described above, we estimated FAST sub-category parameters for the primary categories presented in Table 1. Using 3151 pantry orders placed between 1 January 2014 and 30 March 2015, FAST scores were generated for 2636 pantry orders placed between 1 January 2013 and 31 December 2013. The correlation between the out-of-sample FAST scores based on the sub-category parameters and the corresponding HEI-2010 scores was 0·75. The correlation between the out-of-sample FAST scores based on the primary category parameters and the corresponding HEI-2010 scores was 0·68. The correlation for the primary category scheme is lower because it assumes that the share of gross weight for foods in sub-categories within a category is always equal to the average share. For comparison, out-of-sample correlations for alternative category schemes are 0·68 for the FBC categories and 0·67 for the combined CEX categories used in King et al.’s(10) study.

Table 2 presents parameter estimates for FAST sub-categories and primary categories along with assumed sub-category shares for each primary category. Nearly all sub-category parameter estimates are highly significant and magnitudes for parameter estimates correspond well with expectations. For example, the parameter estimate for dried fruit, 155·5, is much higher than the parameter estimate for canned fruit, 87·9, since dried fruits have a higher nutritional density than canned fruits when considering packaging and water content. Similarly, both these parameters are much higher than the parameters for salty and sweet desserts and snacks of 35·9 and 29·0 respectively. Since FAST parameters simultaneously account for both the nutritional quality of the food in a category and the average weight of items in the category, parameters cannot be directly compared with one another.

Table 2 Parameter estimates for FAST sub-categories and primary categories along with assumed sub-category shares for each primary category; data from 5786 food pantry orders in Minnesota, USA from January 2013 to March 2015

FAST, Food Assortment Scoring Tool.

The primary category parameters reflect both the sub-category parameters and the sub-category shares. For example, the primary category parameter for the fresh fruits and vegetables category is calculated as:

(4) $$63\cdot4 = (61\cdot2)(0\cdot871) + (78\cdot4)(0\cdot129).$$

Here, the primary category parameter is closer to the sub-category parameter for fresh vegetables because the sub-category share for fresh vegetables is larger. In another setting, if the sub-category shares within this category were equal, the primary category parameter for fresh fruits and vegetables would rise to 69·8. This illustrates how the FAST sub-category parameters are adaptable to new circumstances. Unfortunately, determining sub-category shares can be time- and resource-intensive.
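The arithmetic of equation (4), along with the equal-shares counterfactual of 69·8 mentioned above, can be checked directly (a trivial verification, not study code):

```python
# Verify equation (4): the fresh fruits/vegetables primary parameter as a
# share-weighted average of the fresh vegetables (61.2, share 0.871) and
# fresh fruits (78.4, share 0.129) sub-category parameters.
observed = 61.2 * 0.871 + 78.4 * 0.129   # rounds to 63.4

# Counterfactual with equal sub-category shares, as discussed in the text
equal_shares = 61.2 * 0.5 + 78.4 * 0.5   # rounds to 69.8
```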

FAST scores

Table 3 displays the gross weight shares for foods by category. Fresh fruits and vegetables have high but also highly variable gross weight shares across the measures, comprising 12 % of the availability weight, 21 % of the client cart weight and 25 % of the flow weight. All other categories had less variation in gross weight shares across measures. Whole grains comprised 3 % of the total weight of foods across all measures, whereas non-whole grains comprised 9–11 %. Total FAST scores were, on average, similar for availability measures (average 63·8, range 55·8–72·2), client carts (average 62·5, range 50·3–79·6) and flow (average 61·5, range 54·9–66·0).

Table 3 Food pantry FAST scores and gross weight shares for each category*; data from implementation of FAST in five food pantries and observation of food carts of sixty food pantry clients over five days in greater Minneapolis/St. Paul, Minnesota, USA, June–August 2016

FAST, Food Assortment Scoring Tool; GWS, gross weight share.

* Percentages may not add to 100 % due to rounding.

Validation

Average HEI-2010 scores for availability, client carts and overall measures are presented in Table 4. HEI-2010 scores were 7·1 points higher than average FAST scores for both availability and client carts.

Table 4 Correlation between FAST and HEI-2010 measures; data from implementation of FAST in five food pantries and observation of food carts of sixty food pantry clients over five days in greater Minneapolis/St. Paul, Minnesota, USA, June–August 2016

FAST, Food Assortment Scoring Tool; HEI-2010, Healthy Eating Index-2010.

Correlations between HEI-2010 and FAST are also presented in Table 4. The correlation between availability measures was 0·80. For client measures, it was 0·66. The overall correlation was 0·66.

Discussion

Building on the work of King et al. demonstrating that a nutrition index based on gross product weight could serve as a useful instrument for monitoring nutritional quality in the hunger relief setting(10), the present study refined a thirteen-category FAST index with input from six food pantry representatives and two food banks. Implementation of the FAST in a diverse set of five pantries was feasible (i.e. foods were sorted, weighed and recorded by food pantry staff and volunteers for five consecutive days) and the measure correlated well with HEI-2010 scores at different stages of food distribution.

FAST has several similarities to the USDA’s HEI-2010. The score ranges from 0 to 100 and is derived from food categories that prioritize fruits, vegetables, whole grains and dairy. However, HEI-2010 uses nutrient density information, while the FAST calculations are based on gross product weights. This alteration is particularly important for pantries, where donations, procurement and food distributed are usually tracked by gross weight. Another key difference is that HEI-2010 subcomponents parse out specific nutrients (e.g. fatty acid ratio, sodium), while FAST categories are based only on whole food products (e.g. beverages, desserts and snacks). While FAST does not provide subcomponent scores, it does calculate the proportion of food in each category by weight, which can be compared across agencies or tracked over time. In short, as it is designed to do( 10 ), the FAST trades a degree of precision and detail for transparency and simplicity.
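As a rough illustration of this weight-based approach, a score of this kind can be computed directly from recorded category weights. The sketch below is not the published FAST formula; the category names and per-category point values are invented purely for illustration.

```python
# Minimal sketch of a FAST-style score computed from gross product weights.
# The categories and point parameters below are HYPOTHETICAL illustrations,
# not the published FAST parameters.

def fast_style_score(weights_lb, points_per_category):
    """Score a food assortment 0-100 from gross weight shares.

    weights_lb: dict mapping category -> total gross weight recorded
    points_per_category: dict mapping category -> points per unit of share
    """
    total = sum(weights_lb.values())
    if total == 0:
        return 0.0
    # Gross weight share for each category.
    shares = {cat: w / total for cat, w in weights_lb.items()}
    # Score is the share-weighted sum of category point values, capped at 100.
    score = sum(shares[cat] * points_per_category.get(cat, 0) for cat in shares)
    return min(score, 100.0)

# Hypothetical example: point values and weights chosen for illustration only.
points = {"fresh produce": 100, "whole grains": 100,
          "non-whole grains": 50, "desserts and snacks": 0}
weights = {"fresh produce": 120.0, "whole grains": 15.0,
           "non-whole grains": 50.0, "desserts and snacks": 25.0}
print(round(fast_style_score(weights, points), 1))  # 76.2
```

Because only gross weights enter the calculation, staff can produce a score from the same scale measurements they already record for donations and distributions.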

Establishing a comprehensible measurement system in pantries can prevent errors and allow pantries to use their scores in ways that are meaningful to them; for example, to report their scores to funding agencies and donors. FAST is also an improvement on the HEI-2010 in that it can calculate a weighted average of scores from different sources; thus, it can easily be used in procurement to aggregate foods from many sources. The successful implementation of a valid measure like FAST could bring widespread attention to nutritional quality in the hunger relief system and provide a method for monitoring it at scale. Because FAST categories mostly align with existing UNC codes, food banks could integrate FAST procedures to measure their own provisions without major overhauls and/or embed FAST scores into ordering receipts from pantries.
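The source-aggregation property described above amounts to a weight-weighted mean of per-source scores. The sketch below illustrates the arithmetic; the source scores and poundages are hypothetical.

```python
# Sketch of aggregating FAST-style scores across food sources by gross weight,
# as described for procurement use. All numbers are hypothetical examples.

def weighted_fast(sources):
    """sources: list of (score, gross_weight_lb) pairs, one per food source."""
    total_weight = sum(w for _, w in sources)
    if total_weight == 0:
        return 0.0
    # Each source contributes in proportion to the weight of food it supplied.
    return sum(score * w for score, w in sources) / total_weight

# e.g. a food bank order (score 65, 800 lb) plus local donations (score 50, 200 lb)
print(weighted_fast([(65.0, 800.0), (50.0, 200.0)]))  # 62.0
```

An HEI-2010 score, by contrast, must be recomputed from nutrient data over the pooled foods, which is why source-level aggregation is simpler with a weight-based index.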

King et al.( 10 ) noted that the assortment of foods within a category may change over a period of time, which could require parameters to be recalibrated. The estimation of sub-category parameters allows for the formula to be recalculated based on known changes to gross weight shares that might occur over time, among food sources and between contexts. For instance, regional differences in food availability, secular improvements in food bank provisions, the addition of new food procurement sources or major changes in federal commodity foods could all justify recalibration.
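One plausible form such a recalibration could take is recomputing a category's parameter as the share-weighted average of fixed sub-category values when the observed sub-category mix shifts. The sub-category names, point values and weights below are invented for illustration and do not come from the published tool.

```python
# Hedged sketch of recalibrating a category parameter when the mix of
# sub-categories shifts. Sub-category values and weights are HYPOTHETICAL.

def recalibrate_parameter(sub_values, sub_weights):
    """Recompute a category's parameter as the gross-weight-share-weighted
    average of fixed sub-category point values.

    sub_values: dict sub-category -> points (held fixed)
    sub_weights: dict sub-category -> observed gross weight
    """
    total = sum(sub_weights.values())
    if total == 0:
        return 0.0
    return sum(sub_values[s] * w / total for s, w in sub_weights.items())

# If canned vegetables (50 points) fall from 70 % to 40 % of the category's
# weight while fresh vegetables (100 points) grow, the parameter rises.
old = recalibrate_parameter({"canned": 50, "fresh": 100}, {"canned": 70, "fresh": 30})
new = recalibrate_parameter({"canned": 50, "fresh": 100}, {"canned": 40, "fresh": 60})
print(old, new)  # 65.0 80.0
```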

FAST scores in the current study were fairly consistent across availability, client cart and flow measures. Clients disproportionately selected more fruits and vegetables than were available, indicating a strong demand for these items. This is consistent with a previous study indicating clients’ preferences for healthier options, despite perceptions of hunger relief workers to the contrary( 25 ). Future research might use the FAST tool to explore whether different pantry environments (e.g. healthy food promotions) affect client demand. In exploring different uses for FAST, inventory measures of availability should be used with caution, as they will underestimate the actual outflow of high-turnover items that are replenished more often. For this reason, measuring flow and/or client selections more accurately reflects provisions.

FAST estimates were consistently lower than HEI-2010 scores, perhaps because the observed HEI-2010 scores were relatively high and FAST scores tend to cluster around the midpoint of the index range. Because this downward bias appears to be systematic and very high scores are difficult to achieve, it may be that FAST is more useful for monitoring change over time than creating a specific standard for hunger relief agencies to meet. For instance, a reasonable goal for an agency might be to improve their FAST scores by a certain percentage, rather than achieving a score of 80. Notably, HEI-2010 scores in the present study were considerably higher than food measures in other settings, which have indicated scores of 55 for the 2010 US food supply( 21 ) and 36 for purchases at corner stores( 26 ). Scores in the current study are similar to, but slightly higher than, previously reported HEI-2010 scores for the hunger relief system( 8 ).

Limitations

There are several limitations of both the FAST and the evaluation presented here. FAST is a relatively blunt instrument compared with HEI-2010; in adopting it, hunger relief agencies must rely on an overall score, since meaningful component scores cannot be calculated with the FAST. FAST scores are generally closer to the midpoint of the index range; as a result, very high and very low scores are uncommon. The measure also requires staff and volunteer training, and includes the physical task of sorting and weighing foods received.

Our evaluation of FAST was limited to tracking scores in a small sample of pantries during one week. While the participating pantries were diverse, it is unclear whether there are limits on the generalizability of the FAST tool in the hunger relief setting. Additionally, it is unclear how the FAST performs in tracking changes over time since the current study calculated FAST at a single point in time. Recalibration of the FAST is possible by changing the gross weight shares of the sub-category parameters presented here, but currently there are no set standards for when recalibration is necessary.

Conclusions

The FAST is a thirteen-category nutrition index based on gross product weight. It shows good potential as a flexible, comprehensible and valid tool that food pantries can use to monitor the nutritional quality of their products. It correlates well with the resource-intensive HEI-2010 and represents a comprehensive measure of the overall nutritional quality of foods supplied and distributed by food pantries. Moreover, it can be customized to local context, implemented and adapted by individual food pantry operators, and allows for agencies to be compared across the hunger relief network. Future studies are needed to test its use over long periods of time.

Acknowledgements

Financial support: This research was supported by a grant from the Target Foundation. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Target Foundation. Data management for this study was supported by the National Institutes of Health (NIH) (grant number UL1TR000114) from the National Center for Advancing Translational Sciences (NCATS). The funders had no role in the design, analysis or writing of this article. Conflict of interest: None. Authorship: C.E.C. was responsible for leading the overall study from which these data originated; contributed to formulation of research questions and led manuscript writing; supported carrying out data collection for this study; assisted in obtaining funding for the study. K.Y.G. contributed to formulation of research questions and writing/revision of the manuscript; supported carrying out the study from which these data originated. Q.W. contributed to analysis; made contributions to writing and revising the manuscript. M.S.N. assisted in obtaining funding for the study; guided and provided feedback on the analysis and interpreting results; contributed to manuscript writing and revisions. R.P.K. contributed to formulation of research questions; led and conducted data analysis; contributed to writing and revision of the manuscript. Ethics of human subject participation: This study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects were approved by the University of Minnesota. Verbal informed consent was obtained from all subjects. Verbal consent was witnessed and formally recorded.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/S1368980018001362

References

1. Weinfeld, N, Mills, G, Borger, C et al. (2014) Hunger in America 2014. Chicago, IL: Feeding America; available at http://help.feedingamerica.org/HungerInAmerica/hunger-in-america-2014-full-report.pdf
2. Seligman, HK, Lyles, C, Marshall, MB et al. (2015) A pilot food bank intervention featuring diabetes-appropriate food improved glycemic control among clients in three states. Health Aff (Millwood) 34, 1956–1963.
3. Duffy, P, Zizza, C, Jacoby, J et al. (2009) Diet quality is low among female food pantry clients in Eastern Alabama. J Nutr Educ Behav 41, 414–419.
4. Simmet, A, Depa, J, Tinnemann, P et al. (2017) The dietary quality of food pantry users: a systematic review of existing literature. J Acad Nutr Diet 117, 563–576.
5. Robaina, KA & Martin, KS (2013) Food insecurity, poor diet quality, and obesity among food pantry participants in Hartford, CT. J Nutr Educ Behav 45, 159–164.
6. Bell, M, Wilbur, L & Smith, C (1998) Nutritional status of persons using a local emergency food system program in Middle America. J Am Diet Assoc 98, 1031–1033.
7. Lenhart, NM & Read, MH (1989) Demographic profile and nutrient intake assessment of individuals using emergency food programs. J Am Diet Assoc 89, 1269–1272.
8. Nanney, MS, Grannon, KY, Cureton, C et al. (2016) Application of the Healthy Eating Index-2010 to the hunger relief system. Public Health Nutr 19, 2906–2914.
9. Ross, M, Campbell, EC & Webb, KL (2013) Recent trends in the nutritional quality of food banks’ food and beverage inventory: case studies of six California food banks. J Hunger Environ Nutr 8, 294–309.
10. King, RP, Warren, C, Cureton, C et al. (2016) How healthy is hunger relief food? Am J Agric Econ 98, 533–548.
11. Seidel, M, Laquatra, I, Woods, M et al. (2015) Applying a nutrient-rich foods index algorithm to address nutrient content of food bank food. J Acad Nutr Diet 115, 695–700.
12. Healthy Options Healthy Meals (2013) Choose Healthy Options Program (CHOPSM) Implementation Guide. Developed by MAZON: A Jewish Response to Hunger and Greater Pittsburgh Community Food Bank. http://mazon.org/our-response/our-initiatives/healthy-options-healthy-meals/ (accessed November 2015).
13. Katz, DL, Njike, VY, Rhee, LQ et al. (2010) Performance characteristics of NuVal and the Overall Nutritional Quality Index (ONQI). Am J Clin Nutr 91, issue 4, 1102S–1108S.
14. Hoisington, A, Manore, MM & Raab, C (2011) Nutritional quality of emergency foods. J Am Diet Assoc 111, 573–576.
15. Guenther, PM, Casavale, KO, Reedy, J et al. (2013) Update of the Healthy Eating Index: HEI-2010. J Acad Nutr Diet 113, 569–580.
16. Guenther, PM, Reedy, J & Krebs-Smith, SM (2008) Development of the Healthy Eating Index-2005. J Am Diet Assoc 108, 1896–1901.
17. Guenther, PM, Reedy, J, Krebs-Smith, SM et al. (2008) Evaluation of the Healthy Eating Index-2005. J Am Diet Assoc 108, 1854–1864.
18. Jahns, L, Scheett, AJ, Johnson, LK et al. (2016) Diet quality of items advertised in supermarket sales circulars compared to diets of the US population, as assessed by the Healthy Eating Index-2010. J Acad Nutr Diet 116, 115–122.e1.
19. Reedy, J, Krebs-Smith, SM & Bosire, C (2010) Evaluating the food environment: application of the Healthy Eating Index-2005. Am J Prev Med 38, 465–471.
20. Krebs-Smith, SM, Reedy, J & Bosire, C (2010) Healthfulness of the US food supply: little improvement despite decades of dietary guidance. Am J Prev Med 38, 472–477.
21. Miller, PE, Reedy, J, Kirkpatrick, SI et al. (2015) The United States food supply is not consistent with dietary guidance: evidence from an evaluation using the Healthy Eating Index-2010. J Acad Nutr Diet 115, 95–100.
22. Hearst, MO, Harnack, LJ, Bauer, KW et al. (2013) Nutritional quality at eight US fast-food chains: 14-year trends. Am J Prev Med 44, 589–594.
23. Montville, JB, Ahuja, JKC, Martin, CL et al. (2013) USDA Food and Nutrient Database for Dietary Studies (FNDDS), 5.0. Procedia Food Sci 2, 99–112.
24. US Department of Agriculture, Center for Nutrition Policy and Promotion (n.d.) Healthy Eating Index Support Files 07 08. https://www.cnpp.usda.gov/healthy-eating-index-suport-files-07-08 (accessed March 2018).
25. Campbell, E, Hudson, H, Webb, K et al. (2011) Food preferences of users of the emergency food system. J Hunger Environ Nutr 6, 179–187.
26. Caspi, CE, Lenk, K, Pelletier, JE et al. (2017) Food and beverage purchases in corner stores, gas-marts, pharmacies and dollar stores. Public Health Nutr 20, 2587–2597.