OASIS: an assessment tool of epidemiological surveillance systems in animal health and food safety


The purpose of this study was to develop a standardized tool for the assessment of surveillance systems for zoonoses and animal diseases. We reviewed three existing methods and combined them to develop a semi-quantitative assessment tool that brings together their strengths and provides a standardized way to display multilevel results. We developed a set of 78 assessment criteria divided into ten sections representing the functional parts of a surveillance system. Each criterion was given a score according to a detailed scoring guide. Three graphical assessment outputs were generated using specific combinations of the scores. Output 1 is a general overview in the form of a series of pie charts synthesizing the scores of each section. Output 2 is a histogram representing the quality of eight critical control points. Output 3 is a radar chart representing the level reached by ten system attributes. The tool was applied to five surveillance networks.


Epidemiological surveillance is a key activity for public veterinary services and other public and private organizations in animal health and food safety that require high-quality information in order to take appropriate decisions and implement activities for prevention and control of zoonoses and animal diseases [1].

The quality of health information relies on the quality of the surveillance systems from which it was obtained. It is therefore crucial to assess surveillance systems in order to estimate the usefulness and the correct application of the generated data [2]. The assessment of surveillance systems is consequently a component of both the risk analysis procedures agreed upon at international level [3] and the veterinary services assessment procedures implemented by the World Organization for Animal Health [4].

Several methods have been developed in recent years to assess surveillance networks. Some assess the operation of key activities of surveillance networks on a purely qualitative basis, like the surveillance network assessment tool used in the Caribbean [5], or in a semi-quantitative way, like the method used for countries in Africa [6]. Some other methods specifically assess critical control points (CCPs) of surveillance networks in a semi-quantitative way in order to produce recommendations for improvement [7, 8]. Finally, other methods developed for public health surveillance systems by the Centers for Disease Control (CDC), World Health Organization (WHO) and other health organizations are based on the assessment of surveillance system attributes such as sensitivity, timeliness or acceptability [9–12].

All these methods offer a panel of valuable and complementary tools (various questionnaires), objectives (comparison of systems, identification of weak points, improvement of surveillance), results (surveillance system attributes, CCPs, operation points) and result displays. Here we aimed to combine these methods into a complete, standardized assessment tool that could be used on a common basis for a wide range of surveillance systems in animal health and food safety and could produce multilevel results. Such a tool would be useful for standardizing the results produced and consequently enable better comparison of surveillance systems. It would also be of great help to evaluators and managers of surveillance systems for the implementation and follow-up of assessments. Our first objective was to develop a tool enabling assessment of a surveillance system with the aim of proposing recommendations for its improvement. At this stage, the assessment is only technical and no cost-benefit analysis process has been included.

We assembled a team of ten epidemiologists, developers and users of assessment methods and surveillance system managers in order to develop and apply this new tool known as OASIS (acronym for the French translation of ‘analysis tool for surveillance systems’).


Three types of current assessment methods were used as a basis for the development of OASIS: (i) the Surveillance Network Assessment Tool (SNAT) developed in the Caribbean [5, 13], (ii) the CCP assessment method developed by Dufour [7] and (iii) the guidelines for evaluation of surveillance systems developed by the CDC and WHO [9, 11].

Surveillance network assessment tool

The SNAT is the result of a combined study started in 2005 and undertaken by a group of veterinary epidemiologists. This tool was specifically designed to assess national surveillance systems [5, 13]. It has been used extensively in the Caribbean region within the regional animal health network CaribVET. More than 15 countries have been assessed, and some of the national results can be accessed on the website of the regional network. The SNAT consists of two logical phases. The first phase draws up a detailed inventory of the structures and procedures of the epidemiological surveillance network for animal diseases. The second phase presents a summary of the situation of the network for its principal fields of activity, through a summary table. The surveillance system is described using a questionnaire organized into ten sections which constitute a typical surveillance protocol: (i) objectives and scope of surveillance; (ii) central institutional organization; (iii) field institutional organization; (iv) diagnostic laboratory; (v) formalization of surveillance procedures; (vi) data management; (vii) coordination and supervision of the network; (viii) training; (ix) restitution and diffusion of information; (x) evaluation and performance indicators.

The summary part of each section of the questionnaire is always presented as four criteria that may or may not be met by the network under study. If a criterion is met, the corresponding box is ticked. The level of compliance of each section is represented by a pie chart, and the overall output of the method is a series of ten pie charts (one per section).
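The original SNAT synthesis described above can be sketched as a simple fraction of ticked boxes; the criteria below are illustrative, not taken from the actual SNAT questionnaire.

```python
# Sketch of the original SNAT section synthesis: each section is summarized
# by exactly four yes/no criteria, and the pie chart shows the fraction met.

def snat_section_fraction(ticks):
    """ticks: four booleans, one per criterion (met / not met)."""
    assert len(ticks) == 4, "the original SNAT uses four criteria per section"
    return sum(ticks) / 4

# Hypothetical section where three of the four criteria are met:
print(snat_section_fraction([True, True, False, True]))  # 0.75
```

This all-or-nothing scoring is what the Discussion later calls "very reductive": a partly satisfied criterion has to be forced into one of the two boxes.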

The SNAT was later adapted for the assessment of national bee mortality surveillance systems in Europe [14].

CCP assessment method

This assessment method is based on the identification of the critical points in the operation of an epidemiological surveillance system [7]. The critical points were identified using the HACCP (Hazard Analysis and Critical Control Point) method. This method, initially developed for food hygiene, was transposed to epidemiological surveillance. A list of ‘hazards’ was drawn up corresponding to possible biases resulting from poor network operation, thereby identifying the critical points enabling control of these hazards.

The evaluation grid is made up of a list of criteria necessary for assessing each critical point.

Considering that surveillance systems operate differently for existing endemic diseases and exotic diseases (sampling strategy, importance of communication, etc.), two different evaluation grids were developed.

A score is attributed to the control of each critical point in order to obtain a total network score of 100 points. The score achieved as a result of evaluation first enables the measurement of the possible margin for improvement for each critical point, and then a comparison of the operational quality of different networks.

A questionnaire and a scoring guide are provided to collect all necessary information and to help users complete the assessment grid. The graphical output is a histogram, each bar representing the level reached by a CCP.

As a result of the assessment, the critical points with the poorest scores are identified and proposals for improvement are made.

This method has been used to assess several surveillance systems in France [15] and in Africa [8].

Assessment of surveillance system attributes

Since their first publication in 1988 [16], the CDC guidelines [9, 10] have been regularly updated in order to provide standards for public health services to assess surveillance systems. These guidelines take into account the great variety of surveillance systems and are intended to be suitable for all of them.

The evaluation method focuses on assessing how well the system operates in achieving its purpose and objectives.

This method recommends that the assessment include ten system attributes: (i) simplicity, (ii) flexibility, (iii) data quality, (iv) acceptability, (v) sensitivity, (vi) positive predictive value, (vii) representativeness, (viii) timeliness, (ix) stability and (x) usefulness. It should be noted that the importance attached to the evaluation of each attribute is system-dependent, in keeping with the purpose of the surveillance. Depending on the version of the guidelines or frameworks, other attributes are included, e.g. portability or system cost [10].

The evaluation process involves several successive tasks including the involvement of the stakeholders in the evaluation, a detailed description of the surveillance system (objectives, activities, resources), gathering information regarding system performance (description and estimation of each system attribute), stating conclusions and making recommendations. A checklist for the evaluation process and relevant standards for task details are also provided. No specific graphical output is attached to this method.

Other organizations such as WHO [11, 17–19] and the Canadian Public Health Organization [12] base their assessment on a similar list of attributes.

In all these methods, the assessment of system attributes is strictly qualitative.

Work process

A team of ten researchers from the French Agency for Food, Environmental and Occupational Health Safety (ANSES) was formed. These researchers were epidemiologists, developers and users of assessment methods, or managers of surveillance systems in animal health and food safety.

The method used for the study included the content and outputs of the three assessment methods mentioned above in order to assess their complementarities, to consider the possibility of combining their processes, tools and outputs and finally to produce a complete and standardized methodology.

The resulting method was applied to five different French surveillance systems [foot-and-mouth disease (FMD), rabies in bats, poultry disease network, antimicrobial resistance in pathogenic bacteria of animal origin, laboratory network for Salmonella detection in the food chain].


OASIS methodology

The analysis of the existing evaluation methods showed that they nearly all follow the same process: setting up of an assessment team, onsite evaluation for the collection of all relevant data for the description of the structure and operation of the surveillance system, use of a questionnaire or a checklist to collect these data, analysis of data and statement of conclusions and recommendations.

All topics addressed for the description of the surveillance systems are very similar, supporting the statement that most surveillance systems operate according to the same standards whatever their scope (animal health or public health), even if their components vary considerably. We therefore considered the possibility of developing an information collection questionnaire including most of the useful information, concentrating more on functions that have to be integrated in the operation of a surveillance system (strategic decision taking, data management) than on specific structures which might differ greatly from one system to another (steering committee, database, geographical information system). A balance had to be found between questions that are too detailed to cover all possible components of a surveillance system and questions that are too general, leading to imprecise answers. Further experience in applying the questionnaire will help to refine it and find the appropriate balance.

The three evaluation methods differ significantly in the way information is compiled and treated. One method is purely qualitative (CDC method), another is based on standardized qualitative assessment criteria (SNAT) and the third can be considered as semi-quantitative, with the scoring of assessment criteria (CCP). These differences led the authors of each method to use three different graphical outputs to present the results of the assessment as previously detailed. We considered them to be complementary and thus chose to retain all three (with some modifications) as the outputs for the OASIS method. To achieve this, it was necessary to find an appropriate way to link the collection of information from the system to each output. We decided to produce them using a semi-quantitative method. We therefore developed a set of criteria to be scored, thus providing a semi-quantitative assessment of all the activities and structures of a surveillance system. Once this scoring is completed, each output, generated using a specific combination of the scores of the assessment criteria, can be automatically calculated.

The OASIS tool

A list of 78 assessment criteria describing the situation and operation of a surveillance system was produced (Table 1). These assessment criteria were divided into ten sections according to the structure and activities of a surveillance system. Each criterion was scored on a scale from 0 to 3 according to the level of compliance of the system under examination. Criteria not relevant to the surveillance system under consideration were rated ‘not applicable’ and excluded from the synthesis. Scoring was done according to a guide detailing, for each individual score, the situation in which that score should be awarded. An example of a scoring guide for one criterion is given in Table 2.

Table 1. List of assessment criteria for scoring in the OASIS method

Table 2. Example of the scoring guide: scoring benchmark for assessment criteria 5.10 ‘quality of collected samples’
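The scoring and synthesis logic just described can be sketched as follows; the criterion scores are illustrative, not taken from the actual OASIS grid.

```python
# Sketch of OASIS-style criterion scoring: each criterion is scored 0-3,
# or marked 'not applicable' (None here); inapplicable criteria are
# simply excluded from the section synthesis.

def section_completion(scores):
    """Return a section's achievement as a fraction of its maximum
    possible score, ignoring 'not applicable' criteria (None)."""
    applicable = [s for s in scores if s is not None]
    if not applicable:
        return None  # nothing to assess in this section
    return sum(applicable) / (3 * len(applicable))

# Hypothetical section with five criteria, one of them not applicable:
tools_scores = [3, 2, 1, None, 3]
print(section_completion(tools_scores))  # 9 / 12 = 0.75
```

Excluding inapplicable criteria (rather than scoring them 0) keeps the denominator honest: a system is not penalized for activities it legitimately does not perform.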

A 42-page questionnaire was developed to support the collection of the information used for scoring the assessment criteria.

Output 1 is based on the SNAT method, some sections of which were modified. One section was integrated into another (‘supervision’ into ‘central institutional organization’) and one section was split into two sections (‘formalization of surveillance procedures’ into ‘surveillance tools’ and ‘surveillance procedures’). The number of sections has thus been maintained at ten: (i) objectives and scope of surveillance; (ii) central institutional organization; (iii) field institutional organization; (iv) diagnostic laboratory; (v) surveillance tools; (vi) surveillance procedures; (vii) data management; (viii) training; (ix) restitution and diffusion of information; (x) evaluation and performance. These sections are used as the basis for the distribution of the 78 assessment criteria. Each section is summarized by a pie chart representing the result of the scores obtained by all criteria of the section (Fig. 1). The contribution of the assessment criteria to the section result is not weighted. Output 1 is considered as a general view of the structure and operation of the surveillance system. The series of pie charts enables the weak parts of the system to be identified easily.

Fig. 1. Output 1 for the French antimicrobial resistance surveillance network assessment.

Output 2 is based on the CCP assessment method. We determined the CCPs to which each assessment criterion contributes. The scores of the appropriate assessment criteria were then integrated into the initial scoring grid of the CCP method, enabling an automated calculation of the control point once the assessment criteria are scored. Considering the various levels of contribution of the assessment criteria to the control points, weightings were introduced into the calculation. As in the CCP method, the graphical result of this output remains a histogram (Fig. 2). Output 2 can therefore specifically identify the level of control of the CCPs of the surveillance system. This output is thus particularly useful for proposing relevant improvements to the operation of the surveillance system.

Fig. 2. Output 2 for the French antimicrobial resistance surveillance network assessment (critical control points).
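A minimal sketch of the Output 2 calculation follows; the criterion identifiers, CCP names, weights and point budgets are hypothetical, but the mechanism matches the text: weighted criterion scores are rescaled to each CCP's share of the 100-point network total.

```python
# Sketch of the weighted critical-control-point (CCP) calculation
# (hypothetical criteria, CCPs, weights and budgets).

def ccp_scores(scores, ccp_weights, ccp_budgets):
    """For each CCP, combine the weighted criterion scores (0-3 each)
    and rescale to that CCP's share of the 100-point network total."""
    out = {}
    for ccp, weights in ccp_weights.items():
        achieved = sum(scores[c] * w for c, w in weights.items())
        maximum = sum(3 * w for w in weights.values())
        out[ccp] = ccp_budgets[ccp] * achieved / maximum
    return out

scores = {"5.10": 2, "7.1": 3, "3.4": 1}           # criterion id -> score
ccp_weights = {
    "sampling":      {"5.10": 2.0, "3.4": 1.0},    # criterion -> weight
    "data transfer": {"7.1": 1.0},
}
ccp_budgets = {"sampling": 15, "data transfer": 10}  # points out of 100

result = ccp_scores(scores, ccp_weights, ccp_budgets)
```

Once the 78 criteria are scored, every CCP bar of the histogram can be recomputed automatically in this way, which is what the spreadsheet described later does.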

Output 3 is based on the surveillance system attributes developed by the CDC and WHO. Ten system attributes were retained to represent the quality of a surveillance system: (i) sensitivity; (ii) specificity; (iii) representativeness; (iv) timeliness; (v) flexibility; (vi) reliability; (vii) stability; (viii) acceptability; (ix) simplicity; (x) usefulness. We determined the attribute to which each assessment criterion contributes. The numerical result for each attribute combines the scores of its contributing assessment criteria. Considering the various levels of contribution of the assessment criteria to the attributes, weightings were introduced into the calculation. The results of the attribute assessments are placed in a radar chart, enabling the strengths and weaknesses of the surveillance system to be visualized clearly (Fig. 3). The radar chart was chosen to easily differentiate this output from the other two.

Fig. 3. Output 3 (system attributes) for the French antimicrobial resistance surveillance network assessment.
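The attribute calculation can be sketched in the same spirit; the key point, noted in the Discussion, is that one criterion may feed several attributes with different weights. Criterion names and weights below are hypothetical.

```python
# Sketch of the Output 3 calculation: each attribute's level is the
# weighted achievement of its contributing criteria, as a fraction of
# the maximum (hypothetical criteria and weights).

def attribute_levels(scores, contributions):
    """contributions: attribute -> {criterion: weight}."""
    levels = {}
    for attr, weights in contributions.items():
        achieved = sum(scores[c] * w for c, w in weights.items())
        levels[attr] = achieved / sum(3 * w for w in weights.values())
    return levels

scores = {"refresher_training": 2, "lab_proficiency": 3}
contributions = {
    # 'refresher_training' contributes to BOTH attributes, so the
    # attributes are not independent of one another.
    "sensitivity": {"refresher_training": 1.0, "lab_proficiency": 2.0},
    "flexibility": {"refresher_training": 1.0},
}
levels = attribute_levels(scores, contributions)
```

The fractions in `levels` are what the radar chart plots; as discussed below, the authors deliberately do not print these numbers on the chart.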

The size of the questionnaire (42 pages), the large number of assessment criteria to be scored (78 criteria) and the various combinations and weightings applied to generate the graphical outputs required the development of a spreadsheet file to integrate the scores of criteria and automatically process the calculations used to produce the outputs. This spreadsheet enables a comment to be included for each score and also at the end of each section. This last comment can be a complementary explanation of the score chosen or a recommendation for improving the score. In order to facilitate the use and improvement of OASIS, all necessary resources are freely available online and placed under a Creative Commons licence.

The development of the scoring guide, the combination of assessment criteria and attribution of weightings is inevitably subjective. We attempted to reduce this subjectivity by applying a consensus process within the working group. Nevertheless, this consensus does not mean that the best options have always been chosen and further application of the tool might lead to further refinement of the list of criteria, weights applied and scoring guide.


Of the five surveillance systems to which the method was applied, we chose the surveillance network for antimicrobial resistance in pathogenic bacteria of animal origin (known as RESAPATH) as the example for this publication.

French public or private veterinary diagnostic laboratories participating in RESAPATH on a voluntary basis (59 members in 2009) send the results of the antibiograms they performed for field veterinarians to the surveillance network via electronic or paper forms. The network is coordinated by two ANSES laboratories, Lyon and Ploufragan–Plouzané. A steering committee comprising all partners of the network meets annually.

Antibiogram data include information on the samples and the context in which they were performed (laboratory performing the analysis, species, age of the animal, observed pathology, type of sample, location, etc.) as well as antibiotics tested and the diameters of inhibition zones measured.

Antibiogram techniques are recommended by the network and annual information and training sessions for the laboratories are organized to standardize collected data. According to the antibiogram results, some interesting bacterial strains are collected by ANSES in order to perform specific studies on resistance mechanism and to contribute to the veterinary reference frame.

The assessment method was applied by two members of the working group involved in the coordination of RESAPATH. Completing the questionnaire and scoring the assessment criteria took 2 days.

Output 1 of the assessment (Fig. 1) shows an overall good operation of the surveillance network, except for the surveillance procedures (section 6). The other main areas with room for improvement are the central and field institutional organization, training, and evaluation. Output 2 (Fig. 2) shows that the CCP ‘sampling’ is not sufficiently controlled, while confirming the overall quality of the other points. Output 3 (Fig. 3) mainly highlights a lack of representativeness and a margin for improvement in sensitivity, specificity and flexibility, while the other attributes appear to be satisfactory.


Use of the tool

The application process (assessment of five different systems) showed that the method was easy to use for the evaluators, bearing in mind that they had taken part in its development. The very detailed scoring guide proved to be of great help to the assessment teams and enables unambiguous scoring of the assessment criteria. Nevertheless, some criteria can be difficult to assess when data on the operation of the surveillance system are lacking. The example given in Table 2 illustrates this for surveillance systems unable to produce data on the quality of samples collected in the field. In such cases we decided to adopt a worst-case approach, applying the lowest score when data were absent.
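The worst-case rule described above amounts to a simple default; the example below is a sketch with an illustrative criterion, and the distinction from ‘not applicable’ (which is excluded from the synthesis rather than scored 0) is worth making explicit in code.

```python
# Sketch of the worst-case rule: when no evidence exists to judge a
# criterion that IS applicable, award the lowest score. (A criterion
# that is genuinely inapplicable is instead excluded from the synthesis.)

WORST = 0  # lowest point on the 0-3 scale

def score_with_fallback(evidence):
    """Return the assessed score, or the worst score when evidence is missing."""
    return evidence if evidence is not None else WORST

# e.g. no data available on sample quality -> score 0
assert score_with_fallback(None) == 0
assert score_with_fallback(2) == 2
```

This pessimistic default means a system is rewarded for being able to document its own operation, which is itself a surveillance quality.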

Nearly all the application processes needed about 2 days for completion. It must be taken into account that these assessments were performed to validate the applicability of the tool. A complete classical evaluation process as described for most of the methods would require more time to ensure an appropriate involvement of the various participants in the surveillance and onsite verification when the assessors are not the individuals involved in the day-to-day management of the surveillance. Such a complete evaluation process needs to be further developed, especially in order to guide the users of the tool in the interpretation of its different outputs.

The scoring of each criterion and the use of the scoring guide clearly help to highlight the improvement margin and to formulate specific recommendations.

All these practical considerations validate the applicability and ease of use of a single list of criteria to produce the various graphical outputs of the system. Nevertheless, this decision needs to be analysed in relation to each output and the initial process that produced it. The relevance of using three different outputs also needs to be discussed.


Considering Output 1, the original SNAT concept used four criteria to summarize each section, and each criterion could only be considered ‘satisfied’ or ‘not satisfied’. This process was considered very reductive, and some sections clearly needed more criteria to be correctly summarized. It was also difficult in the original SNAT to decide whether a criterion was satisfied, because it was sometimes only partly satisfied. These problems are solved in OASIS by a variable number of criteria per section (from 4 to 14, according to the section) and a scoring scale from 0 to 3 with a clear definition of each score. The reorganization of the section contents places more emphasis on the description of surveillance procedures and completes parts of the surveillance description that appeared weak in the original method.

Considering Output 2, we found that all the information needed to complete the scoring grid was already contained in the list of assessment criteria. The new scoring guide therefore incorporates all the items from the original CCP scoring guide, and the work mainly consisted of attributing the appropriate assessment criterion to each item of the original scoring grid in order to produce the results of the CCP method automatically, once the assessment criteria have been scored. A decision had to be taken regarding which items of the scoring grid should be attributed to each assessment criterion. This decision was reached through consensus within the group, but this does not mean that the optimum configuration was reached. Although the applications performed support the current status of the tool, further use of OASIS might lead to proposals for improvements on this point.

Regarding the complementarity between Outputs 1 and 2, besides the difference in graphical layout, some sections of the two outputs appear comparable, e.g. ‘objectives’ and ‘information diffusion’ in Output 2 are analogous to ‘objectives and scope of surveillance’ and ‘communication’ in Output 1. The application example given for antimicrobial resistance surveillance shows that they give comparable results. Nevertheless, other sections differ significantly. Output 1 describes all aspects of a surveillance system (e.g. field organization, laboratory, training) while Output 2 specifically targets CCPs. Therefore, even if some aspects of these two outputs are clearly related, we chose to maintain both because each illustrates a different view of the same reality.

With regard to Output 3, we also found that the information needed to estimate the level of compliance with the internationally recognized system attributes developed by the CDC and WHO was already contained in the list of assessment criteria. OASIS is a first attempt to quantify these system attributes on the basis of the structure and operation of the surveillance system. The added value of the working group has thus been to link the list of assessment criteria with the list of quality criteria. This work highlighted the fact that the system attributes are clearly not independent (one assessment criterion often contributes to several system attributes). For example, ‘the implementation of regular refresher training courses for agents in the field’ contributes to both the sensitivity and flexibility of the system. Weightings had to be applied to represent the appropriate contribution of the respective assessment criteria. Weightings and criteria distribution were reached through consensus and, as for Output 2, further use of OASIS might lead to proposals for improvements on the decisions taken.

Output 3 is clearly complementary to the other two outputs and provides a synthetic result for interpreting the quality of surveillance systems. While Output 3 is directly useful for understanding the quality of a system (e.g. a lack of sensitivity clearly highlights a problem), Output 2 shows which CCPs could explain this situation and what margins there are for improvement, and Output 1 indicates which part and structure of the system needs to be targeted to modify this situation.

It was intentionally decided not to mention any value (percentage or number of points) for Output 3. Even though the chart is derived from a percentage (the weighted score of all the assessment criteria for the attribute considered, divided by the maximum possible score), mentioning a rate could lead to misuse of the number given. For example, a rate of 24% for sensitivity could be interpreted as if the real sensitivity of the surveillance system had been estimated using quantitative methods, which is not the case. OASIS is a semi-quantitative tool that should be used to draw the general shape of the performance of a surveillance system. Its results should therefore not be over-interpreted.

Although bar charts are generally recommended for a better visualization of results, three different graphical layouts were chosen to easily differentiate the three outputs, to reduce the risk of confusion between them and to reinforce the statement that they represent different aspects of the surveillance system. Nevertheless, the tool enables alternative graphical layouts to be produced.

No cost or cost-benefit analysis is proposed at this stage. Further development of the tool could, as a first stage, provide a system to quantify the cost of the improvements proposed in order to make it possible to simulate the cost-benefit of any improvement to be implemented.

The five teams that applied the tool considered that the outputs correctly described their systems, and the results held no particular surprises. The teams acknowledged that implementing the tool forced the coordination team of the surveillance system to address all activities of the system, which represents a valuable step in the improvement process.


The OASIS method is an attempt to ease the work of surveillance system evaluators by providing a questionnaire and a complete scoring process of 78 assessment criteria leading to the production of three complementary assessment outputs. The OASIS package comprises a questionnaire, a list of assessment criteria, a scoring guide and a spreadsheet for score integration and the production of outputs. The complete assessment process for the implementation and interpretation of the outputs of the tool still needs to be developed.

This method was applied to five surveillance systems, proved easy to use, and is probably applicable to a large range of surveillance systems. Nevertheless, the choice of criteria and the way they are combined to produce the various outputs could be further refined by using OASIS on additional surveillance networks.

ANSES therefore plans to use OASIS to assess the existing surveillance systems on animal health and food safety in France.

So far, OASIS has been used on these two types of surveillance systems. With some adaptation, it could conceivably also be applied to plant health or environmental health surveillance systems.


We thank the managers of the surveillance systems who contributed to the application of the assessment method and specifically Jean-Yves Madec, Marisa Haenni and Eric Jouy for their contribution to the assessment of the RESAPATH network.




1. Dufour, B, Hendrikx, P, Toma, B. The design and establishment of epidemiological surveillance systems for high-risk diseases in developed countries. Revue Scientifique et Technique 2006; 25: 187–198.
2. Salman, MD, Stark, KD, Zepeda, C. Quality assurance applied to animal disease surveillance systems. Revue Scientifique et Technique 2003; 22: 689–696.
3. World Organization for Animal Health. Import risk analysis. In: Terrestrial Animal Health Code. Paris: OIE, 2010, 6 pp.
4. World Organization for Animal Health. OIE tool for the evaluation of performance of veterinary services (OIE PVS tool). Paris: OIE, 2010.
5. Lefrançois, T, et al. CaribVET: Animal Disease Surveillance Network in the Caribbean. International Meeting on Emerging Diseases and Surveillance, Vienna, Austria, 2009, p. 127.
6. Squarzoni, C, et al. Epidemiological surveillance networks in 13 West African countries of the PACE: situation and evaluation of their operation in 2004. Epidémiologie et Santé Animale 2005; 69–80.
7. Dufour, B. Technical and economic evaluation method for use in improving infectious animal disease surveillance networks. Veterinary Research 1999; 30: 27–37.
8. Dufour, B, et al. Evaluation of the epidemiological surveillance network in Chad. Epidémiologie et Santé Animale 1998; 133–140.
9. Centers for Disease Control. Updated guidelines for evaluating public health surveillance systems: recommendations from the guidelines working group. Morbidity and Mortality Weekly Report 2001; 50: 1–51.
10. Centers for Disease Control. Framework for evaluating public health surveillance systems for early detection of outbreaks. Morbidity and Mortality Weekly Report 2004; 53: 1–11.
11. World Health Organisation. Protocol for the evaluation of epidemiological surveillance systems. Geneva: WHO, 1997.
12. Health Canada. Framework and tools for evaluating health surveillance systems. Ottawa: Health Canada, 2004.
13. Lefrançois, T, et al. The Caribbean animal health network (CaribVET): harmonisation and reinforcement of animal disease surveillance focused on emerging diseases. In: Camus, E, Dalibard, C, Martinez, D, Renard, J-F, Roger, F, eds. 12th International Conference of the Association of Institutions for Tropical Veterinary Medicine. Does Control of Animal Infectious Risks Offer a New International Perspective? Montpellier, France, 2007, pp. 421–425.
14. European Food Safety Agency. Bee mortality and bee surveillance in Europe. Parma: EFSA, 2009. Report no. CFP/EFSA/AMU/2008/02.
15. Moutou, F, Dufour, B, Savey, M. Evaluation of the French foot-and-mouth disease epidemiovigilance network. Epidémiologie et Santé Animale 1997.
16. Centers for Disease Control. Guidelines for evaluating surveillance systems. Morbidity and Mortality Weekly Report 1988; 37: 1–18.
17. Declich, S, Carter, AO. Public health surveillance: historical origins, methods and evaluation. Bulletin of the World Health Organization 1994; 72: 285–304.
18. World Health Organisation. Protocol for the assessment of national communicable disease surveillance and response systems: guidelines for assessment teams. Geneva: WHO, 2001.
19. World Health Organisation. Overview of the WHO framework for monitoring and evaluating surveillance and response systems for communicable diseases. Weekly Epidemiological Record 2004; 79: 322–325.