Drug policy evaluation — topic overview

What is drug policy evaluation and why is it important?

Evaluation is essential for effective policymaking, helping ensure that policies and programmes have the desired effect, provide value for money and do not have negative unintended consequences. The importance of evaluation has been recognised in all EU drug strategies and in the strategies of many Member States.

To support those considering or involved in commissioning, managing or undertaking policy evaluations, this page provides access to a range of materials, including a 7-step guide, examples of strategies and evaluations in Europe and potentially useful data sources.

EU evaluations

Timeline of EU drug strategies and plans and their evaluation

In the timeline below, linked documents are colour-coded according to whether they are action plans/strategies or evaluations:

  • action plan or strategy
  • evaluation
1990

The call for a common approach to the drugs phenomenon in Europe was first made by the European Parliament in the mid-1980s. In response, the heads of state and government of the 12 Member States of the European Community agreed on the first European plan in the field of drugs (1990).

1992

Two years later, the plan was revised and a new European plan in the field of drugs (1992) was adopted by the European Committee to Combat Drugs (CELAD), a newly created intergovernmental mechanism among the Member States.

1995

However, in 1995, on the basis of the new prerogatives conferred by the Treaty on European Union, the European Commission took the lead in drafting and adopting a more comprehensive European action plan to combat drugs (1995–99). This represented an important step towards the development of a European approach to drugs.

1999

Four years later, following a Communication from the European Commission, the European Council endorsed the EU drugs strategy 2000–04 and, in June 2000, adopted a new EU action plan on drugs 2000–04.

2002

The action plan called upon the European Commission to undertake a mid-term evaluation in 2002 and …

2004

… a final evaluation in 2004. This was the first time that such an evaluation exercise had been undertaken in the drugs field at EU level. During 2004, talks, meetings and conferences were organised by successive EU presidencies to give continuity to the European approach to drugs. At its December 2004 meeting, the European Council adopted the EU drugs strategy 2005–12, covering an 8-year period.

2005

Two consecutive 4-year action plans were subsequently adopted, the first of which was the EU drugs action plan 2005–08.

2008

The European Commission was tasked with drawing up annual progress reviews on the implementation of the action plans, for consideration by the Council, and with conducting the final evaluation of the EU drugs action plan (2005–08).

2009

On the basis of this evaluation, the new action plan 2009–12 was drafted.

2012

Action 72 of the EU drugs action plan 2009–12 required the European Commission to undertake an external, independent assessment of the implementation of the EU drugs strategy 2005–12 and its action plans.

2013

On the basis of the external evaluation, the Justice and Home Affairs Council of the European Union endorsed a new EU drugs strategy (2013–20) on 7 December 2012 and a new EU action plan (2013–16) on 6 June 2013.

2016

In 2016, the Commission conducted a mid-term assessment of the strategy, which looked at the outputs of the strategy and their impact.

Drug strategy evaluation at the national level

This section provides a summary of the evaluation of national drug strategies in the EMCDDA's 30 reporting countries (the EU 28, plus Turkey and Norway) up to the end of 2016. More detailed information can be found in the EMCDDA Paper 'National drug strategies in Europe'.

Governments use national drug strategies and action plans to elaborate their approach to illicit drug policy. These strategies generally outline the overall principles and course of action being followed, implemented through programmes and projects. The trend towards the use of such documents has been developing since the mid-1990s, when a third of the EMCDDA's current reporting countries had one; by the turn of the century, two-thirds had adopted one. At present, all 30 countries have an active strategy document.

Alongside this trend, the evaluation of national drug strategy documents has gained momentum: the first evaluations were published in 2003, and by 2010 evaluation had become standard practice among the EMCDDA reporting countries.

Evaluation helps governments in many ways, for example to track implementation progress, gauge a strategy’s continuing relevance, measure inputs and outputs and assess possible impacts. The outcomes from evaluations can be used to make adjustments in active strategies and to develop new ones.

There are many different types of evaluation, and what is most appropriate will depend on factors such as timing, the sort of information required (research questions) and the resources available (for more information see: Evaluating drug policy: a seven-step guide to support the commissioning and managing of evaluations).

The EMCDDA monitors evaluation practices through a typology focused primarily on assessments conducted within the framework of national governments' drug strategy documents (see Table 1). This incorporates both whole-strategy and issue-focused evaluation, alongside ongoing monitoring and research aimed at supporting evaluation.

Table 1: Categories used for describing national evaluations

Evaluation type | Description
Multi-criteria evaluation | A multi-criteria evaluation of a strategy and/or action plan at its mid- or end point
Implementation progress review | A review of the actions taken and/or the strategy's context at its mid- or end point
Issue-specific evaluation | An evaluation or audit of a specific policy or strategy aspect or area
Other approaches | Assessment by means of ongoing indicator monitoring, research projects, or regional or local strategy evaluation

Figure 1: Map of Europe showing policy evaluations by country, as reported to the EMCDDA.

There is often no neat divide between the types of evaluation, and countries may have conducted more than one sort over time. In some countries (e.g. France), evaluations of different projects and responses have long been undertaken and have functioned as assessments of measures outlined in strategies and action plans. Consequently, the map shows a snapshot of the situation reported by EMCDDA countries up to the end of 2016.

A final drug strategy evaluation typically takes the form of either a multi-criteria evaluation or an implementation progress review at the end of the strategy's timeframe. In 2016, 10 multi-criteria evaluations, 10 implementation progress reviews and 4 issue-specific evaluations were reported as having recently taken place, while 6 EMCDDA reporting countries used other approaches, such as a mix of indicator assessment and research projects.
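As a minimal illustration of how such a snapshot can be tallied, the sketch below counts hypothetical country-to-category assignments. The category names follow Table 1, but the countries and assignments are invented for demonstration and are not the EMCDDA's reported data.

```python
from collections import Counter

# Hypothetical assignments of countries to the Table 1 categories;
# these are invented examples, not the EMCDDA's actual data.
reported_evaluations = {
    "Country A": "Multi-criteria evaluation",
    "Country B": "Implementation progress review",
    "Country C": "Issue-specific evaluation",
    "Country D": "Other approaches",
    "Country E": "Multi-criteria evaluation",
}

# Tally how many countries fall into each category, as one might
# do to colour a map like Figure 1.
snapshot = Counter(reported_evaluations.values())
for category, count in snapshot.most_common():
    print(f"{category}: {count}")
```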

Examples of evaluations, where publicly available, provide details about the methods used and the findings. These reports and overviews can be found in the EMCDDA Drugs Library (coming soon), while additional information can be found in the Country Drug Reports. These documents do not, however, represent all evaluations undertaken or the approach followed over time in any country; they are a selection linked to the map for explanatory purposes.

National evaluations

You can access publicly available national evaluation documents below.

National policy evaluation documents found in the EMCDDA Drugs Library

Document name | Country | Publication date
An evaluation of the Government's Drug Strategy 2010 | United Kingdom | July 2017
Rapid Expert Review of the National Drugs Strategy 2009–2016 | Ireland | May 2017
Final evaluation of the Spanish National Drugs Strategy 2009–2016 | Spain | March 2017
Final evaluation of the Spanish Action Plan on Drugs 2013–16 | Spain | March 2017
Evaluation of the Government Resolution on the Action Plan to Reduce Drug Use and Related Harm 2012–2015 (National Drug Policy Coordination Group) | Finland | December 2016
Intermediate evaluation of the Action Plan on Drugs 2013–2016 | Spain | December 2016
Summary report on the implementation of the Action Plan for the implementation of the National Drug Policy Strategy for the period 2013 to 2015 | Czech Republic | February 2016
Follow-up of the national ANDT strategy (2011–14) (Public Health Agency of Sweden) | Sweden | December 2015
Evaluation of drug consumption rooms | Denmark | May 2015
Evaluation of the government's strategy for alcohol, drug, doping and tobacco policy | Sweden | April 2015
Mid-term implementation review of Latvia's National Programme for the Control of Narcotic and Psychotropic Substances and the Prevalence 2011–2017 | Latvia | January 2015
Implementation of the Italian National Action Plan on Drugs objectives in the Regions | Italy | December 2014
Mid-term evaluation of the National Programme for Counteracting Drug Addiction (2011–16) | Poland | December 2014
Evaluation of the Governmental Strategy and Action Plan 2010–2014 of Luxembourg regarding the fight against drugs and addictions | Luxembourg | September 2014
Evaluation: Project anonymous outpatient drug abuse treatment | Denmark | July 2013
Evaluation of the National Anti-Drug Strategy 2005–2012 | Romania | March 2013
External Evaluation of the National Plan Against Drugs and Drug Addiction 2005–2012 (PNCDT) | Portugal | January 2013
Final Report on the National Drug Prevention Strategy 2012 | Estonia | December 2012
Norwegian Escalation plan for the substance field 2007–12: results and instruments | Norway | December 2012
Performance audit: Tackling problem drug use in Malta | Malta | October 2012
Evaluation of Cyprus National Drug Strategy 2009–2012 | Cyprus | September 2012
Evaluation of the National Drug Strategy of the Republic of Croatia (2006–2012) | Croatia | December 2011
Prospects of the Slovenian National Programme on Illicit Drugs 2010–2014: The NGO Sector's View and Proposals | Slovenia | July 2010
Tackling problem drug use | United Kingdom | March 2010
Evaluation of the National Drug Strategy (2000–2009) | Hungary | December 2009
Evaluation of Dutch drug policy | Netherlands | January 2009
Evaluation of the implementation of Slovenia's National Drug Strategy 2004–2009 | Slovenia | August 2008
Evaluation of France's three-year plan on drug control and prevention of dependencies (1999–2002) | France | September 2003

The EMCDDA operates a robust takedown policy: if you believe a document should be removed from this list, please contact us and let us know. Similarly, while we are actively updating this list, we welcome suggestions for publicly available documents or resources that you believe should be included here. We can be contacted at policyevaluation.team[a]emcdda.europa.eu (remember to replace the [a] with '@' before sending your email).

Resources

Resources for drug policy evaluation

We will be adding relevant resources and other useful material to this section over time.

Key EMCDDA resources

Glossary

Glossary of drug policy evaluation terms

Below is a list of terms used when discussing drug policy evaluation.

Activities — processes, tools, events, technology and actions that are part of the programme implementation. These interventions are used to bring about the intended programme changes or results, i.e. the actions taken or work performed to achieve the aims of the intervention.

Added value — the extent to which something happens as a result of an intervention or programme that would not have occurred in the absence of that intervention. Also known as ‘additionality’.

Aim — the purpose of, for example, an intervention or a policy.

Causality — an association between two characteristics that can be demonstrated to be due to cause and effect, i.e. a change in one causes the change in the other.

Coherence — the extent to which intervention logic is non-contradictory or the extent to which the intervention does not contradict other interventions with similar objectives.

Control group — a group of participants in a study not receiving a particular intervention, used as a comparator to evaluate the effects of the intervention.

Criterion — character, property or consequence of a public intervention on the basis of which a judgement will be formulated.

Data — information; facts that can be collected and analysed in order to gain knowledge or make decisions.

Drug action plan — scheme or programme for detailed specific actions. It may accompany or be integrated into a drug strategy but typically focuses on a relatively short period and identifies more detailed actions to implement the strategy, along with timings and responsible parties.

Drug policy — overall philosophy on the matter; position of the government, values and principles; attitude, direction. It encompasses the whole system of laws, regulatory measures, courses of action and funding priorities concerning (illicit) drugs put into effect by governments.

Drug strategy — unifying theme; framework for determination, coherence and direction. It is generally a document, usually time bound, containing objectives and priorities alongside broad actions, and may identify, at a top level, the parties responsible for implementing them.

Effectiveness — the fact that expected effects have been obtained and that objectives have been achieved.

Efficiency — the extent to which the desired effects are achieved at a reasonable cost.

Equity — the extent to which different effects (both positive and negative) are distributed fairly between different groups and/or geographical areas.

Evaluation — a periodic assessment of a programme or project's relevance, performance, efficiency and impact in relation to overall aims and stated objectives. It is a systematic tool which provides a rigorous evidence base to inform decision-making.

Evaluation criteria — aspects of the intervention which will be subject to evaluation. Criteria should fit the evaluation question. If all the criteria are put together, they should account for a good and complete measurement. Examples are relevance, efficiency and effectiveness.

Evaluation question — question asked by the steering group in the terms of reference and which the evaluation team will have to answer.

Evaluation team — the people who perform the evaluation. An evaluation team selects and interprets secondary data, collects primary data, carries out analyses and produces the evaluation report. An evaluation team may be internal or external.

Evidence-based — conscientiously using current best evidence in making decisions.

Evidence-informed policy — an approach to policy decisions that aims to ensure that decision-making is well informed by the best available research evidence.

Ex ante evaluation — an evaluation that is performed before implementation of an intervention. This form of evaluation helps to ensure that an intervention is as relevant and coherent as possible. Its conclusions are meant to be integrated when decisions are made. It provides the relevant authorities with a prior assessment of whether or not issues have been diagnosed correctly, whether or not the strategy and objectives proposed are relevant, whether or not there is incoherence between them or in relation to other related policies and guidelines, and whether or not the expected impacts are realistic.

Ex nunc (or interim) evaluation — an evaluation that is performed during implementation.

Ex post (or final) evaluation — evaluation of an intervention after it has been completed. It strives to understand the factors of success or failure.

External evaluation — evaluation of a public intervention by people not belonging to the administration responsible for its implementation.

Feasibility — the extent to which valid, reliable and consistent data are available for collection.

Impact — fundamental intended or unintended change and direct or indirect consequences occurring in organisations, communities or systems as a result of programme activities within 7 to 10 years, i.e. long-term consequences of the intervention.

Impact (or outcome) evaluation — evaluates whether the observed changes in outcomes (or impacts) can be attributed to a particular policy or intervention, i.e. determining whether or not a causal relationship exists between an intervention or policy and changes in the outcomes.

Indicator — quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to help assess the performance of a policy/intervention (to reflect the changes connected to an intervention, an output accomplished, an effect obtained or a context variable: economic, social or environmental).

Input — financial, human, material, organisational and regulatory means mobilised for the implementation of an intervention.

Internal evaluation — evaluation of a public intervention by an evaluation team belonging to the administration responsible for the programme.

Joint evaluation — evaluation of a public intervention by an evaluation team composed of both internal (people belonging to the administration responsible for the programme) and external evaluators.

Maryland Scientific Methods Scale — a five-point scale used to rank the methodological rigour of evaluation designs, from simple cross-sectional correlations (level 1) to randomised controlled trials (level 5).

Method — complete plan of an evaluation team’s work. A method is an ad hoc procedure, specially constructed in a given context to answer one or more evaluative questions. Some evaluation methods are of low technical complexity, while others include the use of several tools.

Monitoring — a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing intervention with indications of the extent of progress, achievement of objectives and progress in the use of allocated funds.
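To make the contrast with one-off evaluation concrete, below is a minimal monitoring sketch that checks the latest value of each specified indicator against a target. The indicator names and figures are invented, purely for illustration.

```python
# Hypothetical indicator targets and latest observed values;
# names and numbers are invented for illustration only.
targets = {
    "clients_in_treatment": 5000,
    "syringes_distributed": 200_000,
}
latest_values = {
    "clients_in_treatment": 4300,
    "syringes_distributed": 215_000,
}

# Report progress on each indicator as a percentage of its target,
# the kind of continuing check that monitoring implies.
for indicator, target in targets.items():
    value = latest_values[indicator]
    progress = value / target * 100
    print(f"{indicator}: {value} of {target} ({progress:.0f}% of target)")
```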

Need — problem or difficulty affecting concerned groups, which the public intervention aims to solve or overcome.

Norm — level that the intervention has to reach to be judged successful, in terms of a given criterion. For example, the cost per job created was satisfactory compared with a national norm based on a sample of comparable interventions.

Outcomes — the likely or achieved short- and medium-term effects of an intervention's outputs, relating to the aim of the intervention. Specific changes in programme participants' behaviour, knowledge, skills, status and level of functioning.

Outputs — direct products of programme activities which may include types, levels and targets of services to be delivered by the programme.

Process evaluation — one that focuses on programme implementation and operation. A process evaluation could address programme operation and performance.

Programme logic model — picture of how a policy/intervention works — the theory and assumptions underlying the programme. A programme logic model links outcomes (both short- and long-term) with programme activities/processes and the theoretical assumptions/principles of the programme.
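Since a logic model is essentially a chain linking assumptions and activities to outputs, outcomes and impact, a small data-structure sketch may help. The example entries are invented for illustration and do not come from any actual strategy.

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """A programme logic model as a simple chain of linked elements."""
    assumptions: list[str]  # theory/principles underlying the programme
    activities: list[str]   # actions taken or work performed
    outputs: list[str]      # direct products of the activities
    outcomes: list[str]     # short- and medium-term effects
    impact: str             # long-term intended change

# Invented example, purely for illustration.
model = LogicModel(
    assumptions=["School-based prevention reduces early initiation"],
    activities=["Deliver prevention sessions in secondary schools"],
    outputs=["Number of sessions delivered", "Number of pupils reached"],
    outcomes=["Improved knowledge and refusal skills among pupils"],
    impact="Reduced prevalence of drug use among young people",
)
print(model.impact)
```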

Public managers — public (sometimes private) organisations responsible for implementing an intervention.

Random assignment — making a comparison group as similar as possible to the intervention group, to rule out external influences; randomly allocating individuals to either the intervention group or the control group.

Randomised controlled trial (RCT) — an experiment in which two or more interventions, possibly including a control intervention or no intervention, are compared by being randomly allocated to participants.
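A minimal sketch of the random assignment step underlying both of the two entries above, using invented participant identifiers:

```python
import random

# Invented participant identifiers, purely for illustration.
participants = [f"P{i:03d}" for i in range(1, 21)]

# Randomly allocate participants to the intervention or the control
# group, so the two groups are comparable on average.
random.shuffle(participants)
midpoint = len(participants) // 2
intervention_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Intervention:", intervention_group)
print("Control:", control_group)
```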

Relevance — the extent to which an intervention’s objectives are pertinent to the needs, problems and issues to be addressed.

Scope — precise definition of the evaluation object, i.e. what is being evaluated.

Stakeholders — individuals, groups or organisations with an interest in the evaluated intervention or in the evaluation itself, particularly authorities that decided on and financed the intervention, managers, operators and spokespersons of the public concerned.

Steering group — the committee or group of stakeholders responsible for guiding the evaluation team.

Sustainability — the continuation of benefits from an intervention after major development assistance has been completed; the probability of continued long-term benefits.

Terms of reference — the terms of reference define the work and the schedule that must be carried out by the evaluation team. They recall the regulatory framework and specify the scope of an evaluation. They state the main motives for an evaluation and the questions asked. They sum up available knowledge and outline an evaluation method. They describe the distribution of the work and responsibilities among the people participating in an evaluation process. They fix the schedule and, if possible, the budget. They specify the qualifications required of candidate teams as well as the criteria to be used to select an evaluation team.

Tool — standardised procedure used to fulfil a function of evaluation (e.g. regression analysis or questionnaire survey). Evaluation tools serve to collect quantitative or qualitative data, synthesise judgement criteria, explain objectives, estimate impacts, and so on.

Validity — the extent to which the indicator accurately measures what it purports to measure.

Value for money — a value for money evaluation is a judgement as to whether the outcomes achieved are sufficient given the level of resources used to achieve them. It generally includes an assessment of the cost of running the programme, its efficiency (the outputs it achieves for its inputs) and its effectiveness (the extent to which it has achieved expected outcomes), and uses analytical approaches such as cost-effectiveness or cost–benefit analyses.
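As a worked sketch of the kind of cost-effectiveness comparison mentioned above, the snippet below computes cost per unit of outcome for two hypothetical programmes; all names and figures are invented for illustration.

```python
# Invented figures, purely for illustration: two hypothetical
# programmes compared on cost per outcome achieved.
programmes = {
    "Programme A": {"cost": 1_000_000, "clients_retained": 800},
    "Programme B": {"cost": 1_500_000, "clients_retained": 1000},
}

# Cost-effectiveness ratio: resources used per unit of outcome.
# A lower ratio means more outcome per unit of spending.
for name, p in programmes.items():
    ratio = p["cost"] / p["clients_retained"]
    print(f"{name}: {ratio:.0f} per client retained in treatment")
```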
