Appendix 1: Survey methods and quality assurance


This was a multi-method study comprising qualitative and quantitative work. The bulk of the data was gathered through surveys of GPs and practice managers, preceded by a series of semi-structured interviews.

Twenty-six semi-structured interviews and one practice-based focus group were conducted prior to the design of the survey. These were intended to build a more in-depth picture of how QI is viewed ‘on the ground’ in practices, to understand how practices work day to day, and to identify what might affect, positively or negatively, the planning or undertaking of QI work. We interviewed a range of stakeholders, including those working in national-level institutions with an interest in general practice quality, as well as GPs and practice managers from across the UK. The interviews were transcribed and analysed using an inductive, thematic approach. While we knew some of the issues relating to QI work in practices and formulated our questions accordingly, the interviews also raised other, more local challenges and considerations. The interviews fed into the design of the survey questions.

The survey was divided into four brief sections:

  1. basic information about the respondent and their practice characteristics
  2. who in the practice is involved in quality improvement, what improvement projects have been undertaken and what prompted them to become involved
  3. identifying the facilitators of and barriers to improvement
  4. awareness and use of quality improvement tools and training.

We sent an email invitation to take part in an online survey to all 46,238 GPs on the Royal College of General Practitioners (RCGP) membership list (as at 24 July 2017). We decided early on in the project to use the RCGP membership list as the sample for the GP survey. The RCGP was a partner in the research project and a senior GP involved in QI in general practice at a national level was a member of the research team.

The membership list included GPs who had retired, were working abroad or, for various reasons, were not currently practising. These were identified at the beginning of the questionnaire and GPs who had not practised in the UK in the past 12 months were excluded from the analysis. In order to survey practice managers, we sent an invitation letter to all 9,153 practices in the UK, addressed to the practice manager. Although we are aware that some larger practices may have more than one practice manager, the survey allowed only one response per practice. Both surveys were launched at the end of July 2017 and closed at the end of September 2017. An initial invitation and two reminders were sent during this period.

Overall, 2,377 responses from GPs were included in the dataset. Since we do not know the exact number of ineligible GPs on the RCGP membership list, we estimate the response rate to be between 7% and 10%. We received 1,424 responses from practice managers, which is a 16% response rate. We received a response from the practice manager and at least one GP at the same practice in 368 cases.
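The 7–10% range quoted above depends on how many of the 46,238 listed members were actually eligible, which the study could not determine exactly. A minimal sketch of the arithmetic, where the ineligibility fractions are illustrative assumptions chosen to reproduce the reported range, not figures from the study:

```python
# Sketch of the response-rate bounds for the GP survey.
# The counts come from the report; the assumed ineligibility
# fractions are illustrative, not taken from the study.
LIST_SIZE = 46_238   # RCGP members emailed
RESPONSES = 2_377    # usable GP responses in the dataset

def response_rate(ineligible_fraction: float) -> float:
    """Response rate if a given fraction of the list was ineligible
    (retired, working abroad, or not practising in the past 12 months)."""
    eligible = LIST_SIZE * (1 - ineligible_fraction)
    return RESPONSES / eligible

# If roughly a quarter to a half of the list were ineligible,
# the response rate falls between about 7% and 10%.
print(f"{response_rate(0.27):.1%}")  # ~7.0%
print(f"{response_rate(0.49):.1%}")  # ~10.1%
```

The practice manager figure is simpler: 1,424 responses over 9,153 practices gives roughly 16%, though this uses practices rather than managers as the denominator.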

Quality assurance and limitations

Ethical approval was obtained from the LSHTM ethics committee, and NHS Research Governance approval was received from the relevant bodies in each of the four countries of the UK. The project was overseen by an Expert Advisory Group, which met periodically during the course of the research. This report was also sent to them for comment.

A key limitation of the study is the low response rate from both GPs and practice managers. The sample frames were not perfect, and there were advantages and disadvantages to using the RCGP membership list for the GP survey. Although the RCGP membership may not be representative of all GPs, around 69% of GPs are currently members, and we believe that the respondents are broadly representative of GPs across the UK (Appendix 2). There were other possible ways to construct a sample, but, aware of the high workload of most GPs and the number of surveys they receive each week, we wanted to increase the chances of a response by writing to named GPs and by using an email invitation containing a link to the survey. We also believed that a survey sent from a professional body would stand out from the many others GPs receive and carry greater credibility. The membership list also included many GPs who were no longer practising, as well as some people who were RCGP members because of other positions they held; these were filtered out via a series of questions at the start of the survey.

The invitation letter to take part in the practice manager survey was posted to all practices, including those that may not have had a practice manager, because we wanted to gather the views of all practice managers across the UK. There is no central database of practice managers, as they are employed by individual practices, hence the use of a mailshot. This raised a further issue that will have lowered the practice manager response rate: without email addresses for the managers, we could not send an electronic link to the online survey. The link was included in the letter and had to be typed into a browser, which is known to lower response rates. Another factor affecting the response rate is the motivation of GPs and practice managers, with those more interested in the subject being more likely to respond. Our respondents may therefore have been more knowledgeable than the ‘average’ GP or practice manager, which would limit representativeness: we may be missing data from those who know less, or are less involved, than other GPs and practice managers. A further limitation is that we collected respondents’ own accounts of what is happening in their practices and of their own knowledge and behaviours, and such self-reports can be subject to recall and other reporting errors. Response and data quality issues are explored further in Erens et al.

The analysis of the survey data was straightforward, with the exception of one question, which asked respondents to use a sliding scale to indicate how much protected time they have each month. The scale defaulted to zero, so if the slider was not moved we could not tell whether the respondent had zero protected time or had skipped the question.
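Because the slider defaulted to zero, a recorded zero cannot be distinguished from a skipped question, and the safest analytic treatment is to flag such values as missing rather than as true zeros. A minimal sketch of this cleaning step, with invented field names and values (nothing here comes from the study's dataset):

```python
# Hypothetical illustration of the protected-time slider problem:
# the scale defaulted to zero, so a recorded 0 may mean either
# "no protected time" or "question skipped". Values are invented.
from typing import Optional

def clean_protected_time(raw_value: float) -> Optional[float]:
    """Treat the default slider value (0) as missing, since a true
    zero cannot be distinguished from a skipped question."""
    return None if raw_value == 0 else raw_value

responses = [0, 4.5, 0, 2]  # hours of protected time per month
cleaned = [clean_protected_time(v) for v in responses]
print(cleaned)  # [None, 4.5, None, 2]
```

A slider that starts off-scale (with no value recorded until the respondent moves it) would avoid this ambiguity entirely.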

** The response rate is based on the number of practices rather than practice managers, as the total number of practice managers is not known.

†† Personal communication with RCGP.
