How safe are hospitals? We still don't know

  • Niall Hunter, Editor

20/2/2014

How safe are our hospitals?

This is a fairly simple question the public has been seeking answers to for some time, especially in view of the patient safety scandals of recent years.

However, while it may be a simple question, the answers to it to date have been somewhat more complex and cloudy.

We are frequently assured, often in the wake of the latest major adverse incident in the provision of healthcare, that our health services are safe and of high quality nearly all of the time.

All very well and good, and undoubtedly true in most cases.

Unfortunately, the people giving us these assurances are usually those providing and running the services.

However well intentioned these assurances are, people using the service will automatically query why they should settle for assurances from healthcare providers, given the inherent potential for bias.

This somewhat paternalistic approach to safety assurance also increases the risk of complacency when it comes to minimising the type of errors of omission, commission and downright negligence that have been at the heart of many scandals.

Assurances that staff are 'learning lessons' from adverse events are of little comfort to the victims of such events and to those who might be using the services of a hospital where these incidents occurred.

How can the public be sure that quality and safety are being upheld if we have no statistical proof of this?

We have therefore been waiting for many years for independently verified and publicly available statistics on hospitals' clinical results.

Traditionally, we have had no independent verification of individual hospitals' clinical results and how hospitals compare to each other in the safety stakes.

Until now, that is.

Behold a new report: 'Healthcare Quality Indicators in the Irish Health System', produced by the Department of Health.

The purpose of the report was to 'propose and examine a number of key quality indicators derived from existing (mostly confidential to date) Hospital Inpatient Enquiry (HIPE) data in order to assess the feasibility of monitoring quality of care and measuring health service performance'.

In other words, the Chief Medical Officer's office, which compiled the report, was trying to get a handle on quality and safety standards by measuring post-admission death rates in acute hospitals around the country for certain conditions. In doing so, it aimed, insofar as it was possible, to detect clinical results which were outside accepted norms and act upon any issues involved in the provision of care in order to prevent further patient harm.

The report, in its introduction, says: "monitoring the performance of health services in meeting their objectives is important in supporting and strengthening accountability and in improving the quality and safety of the services."

"Information, including performance indicators, can be used to monitor and evaluate the effectiveness and safety of services, to identify areas of performance which may require further exploration and action, and to inform decisions about the planning, design and delivery of services."

The findings of the report, however, do not live up to the hype (if you'll pardon the obscure pun) of the introduction.

While the report is welcome as a first step on what is going to be a long road towards proper accountability and transparency on quality and safety in hospital care, it actually doesn't tell us very much.

It doesn't even name individual hospitals in publishing mortality rates, although the Department has promised that hospitals will be named in future reports of this type.

The report's conclusion, however, makes one wonder whether it was worth the (considerable) effort of reading it and interpreting its figures.

It states: "The findings set out in this report should not be taken as making any inferences concerning quality of care in hospitals and certainly should not be interpreted as rating hospitals with respect to the selected indicators."

We are provided with tables giving us the mortality rates within 30 days of hospital admission for patients in the years 2008-2010.

The crude mortality statistics published, which do not take into account the significant factor of the age of patients admitted to a particular hospital, provide apparently worrying findings, such as a death rate following admission with a heart attack that was 10 times higher in the worst performing centre than in the best performing hospital.

Also, there was a fourfold difference in the death rates following admission with ischaemic stroke between the highest and lowest performing hospitals in this treatment category.

When the age of patients was taken into account, many hospitals recorded mortality rates above the national average.
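
To make the crude-versus-adjusted distinction concrete, here is a minimal illustrative sketch in Python. All of the figures in it are hypothetical and are not taken from the report; it simply shows how a hospital that admits older patients can post a much higher crude 30-day mortality rate than a hospital with a younger case mix, while an age-standardised mortality ratio (SMR) tells a very different story.

```python
# Illustrative sketch only: all numbers are hypothetical, not from the report.
# Compares a crude 30-day mortality rate with an indirectly age-standardised
# mortality ratio (SMR) for two invented hospitals with different case mixes.

# Hypothetical national 30-day death rates per admission, by age band
national_rates = {"<65": 0.02, "65-79": 0.06, "80+": 0.15}

# Two hypothetical hospitals: (admissions, deaths) by age band
hospitals = {
    "Hospital A (younger case mix)": {"<65": (800, 18), "65-79": (150, 10), "80+": (50, 8)},
    "Hospital B (older case mix)":   {"<65": (200, 5),  "65-79": (400, 26), "80+": (400, 62)},
}

for name, bands in hospitals.items():
    admissions = sum(a for a, _ in bands.values())
    deaths = sum(d for _, d in bands.values())
    crude_rate = deaths / admissions
    # Deaths we would expect if this hospital had the national rate in each age band
    expected = sum(a * national_rates[band] for band, (a, _) in bands.items())
    smr = deaths / expected  # above 1.0 means more deaths than the case mix predicts
    print(f"{name}: crude rate {crude_rate:.1%}, SMR {smr:.2f}")
```

In this made-up example the crude rates differ by a factor of almost three, yet once the age mix is accounted for, both hospitals record roughly the number of deaths their case mix would predict, which is why crude tables of this kind need to be read with caution.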

The problem with the statistics revealed in the report is that we simply do not know how much we should be worried about them.

This is because the report admits there are a number of logistical and technical flaws with the statistics as presented.

It was discovered that the figures came with a considerable health warning. In some cases, for example, hospitals had incorrectly coded the principal diagnosis, thereby potentially skewing the figures produced. There was also a lack of consistency in the documentation of some hospital records.

Also, the figures do not link the death directly to the principal diagnosis or procedure being undertaken. In other words, the patient may have been admitted for stroke care or for a hip fracture operation, but we cannot be sure if all the deaths were directly related to these factors in every case.

In addition, the mortality rates reported did not take into account factors such as other conditions the patients may have had beyond those they were admitted with, or medication use which may have affected their treatment outcomes.

Mortality rates outlined in the report also do not take into account factors occurring before or after treatment in hospital, including how easy it was for the patient to access the hospital. Other factors, such as social deprivation levels among patients, were not taken into account either.

However, it is hoped that these factors may be taken into account in future audits of mortality rates and clinical care undertaken by the Department.

Frustratingly, while the report states that the mortality rate variations between hospitals may be down to a number of factors including quality/safety of care, data issues and other confounding factors, it does not examine to what extent the variation might have been explained by quality/safety factors.

Surely higher than normal death rates in some hospitals, as outlined in the report, cannot solely be put down to technical factors related to how the data was collected.

Hopefully, this type of issue will be dealt with properly in future audits.

This will help give service users a more accurate picture of hospital safety and provide the health authorities with 'red flags' to take action on potential patient harm issues when they emerge.

[Figure: Major variations in hospital death rates]

