
Are Patient Feedback Surveys Asking the Right Questions?

— It may be time to re-evaluate the questionnaires


Fred Pelzman is an associate professor of medicine at Weill Cornell, and has been a practicing internist for nearly 30 years. He is medical director of Weill Cornell Internal Medicine Associates.

So, how are we doing?

At a recent meeting, we had a report-out from one of the groups that reviews the feedback from patients across the spectrum of care we provide at our practices. Select patients receive these questionnaires from our institution's practices after a visit, either by mail or online, and are asked to give feedback about many aspects of our practice, the care they received, and their overall experience.

The questionnaire addresses questions such as: How are we doing answering the phones? Did your provider communicate the plan to you? Were you informed of wait times? How was the overall quality of care? Was there appropriate follow-up?

We have all gotten these surveys -- a ubiquitous part of the modern world -- after visiting a website or a restaurant, or after almost any interaction with a business these days.

"Before you leave our website, would you be willing to complete a brief questionnaire to help us improve the service we provided you today?"

"Thank you for calling our Customer Service Hotline. Would you be willing to rate the service agent who assisted you today?"

For years, the information we've received through our feedback questionnaire has been delivered to the practice leadership as a marker of how things are going. We also get individual comments on select patients' experiences, including whether they had positive or negative experiences at our practice, with our providers, and with all of the members of our teams.

Yes, we all love getting those nice positive comments like, "Dr. Pelzman is the best!"

And even when we get bad ones, they are certainly opportunities to think about how we're doing things and to consider ways to improve.

But at a certain point during this particular report-out meeting, I was struck by the fact that one of our questions received far fewer responses than all the others; fewer than half as many patients answered this one particular question.

When so few people answer a question, it suggests, first, that the data we've gotten is probably not very reliable, and second, that perhaps the question isn't really getting at the point we're trying to answer.

If a far lower percentage of respondents answers a particular question, maybe the question is worded poorly? Maybe we're asking the wrong thing? Maybe we should ask patients why they're not answering it?

I'm quite sure that the people who created these patient feedback questionnaires are satisfied that they have written perfect questions, positive that the things they're asking are what patients want to tell us and what we need answers to.

Sure, if patients tell us the number one problem they have with our practice is no one answers the phone, then we have a responsibility to respond and fix it, no matter what it takes. If patients feel our communication and follow-up is lacking, then we need to pour resources into making sure this happens the right way, at the right time for every patient after their visits with us.

But if there's one question no one is answering, maybe we shouldn't be asking it?

Or maybe, when they don't answer it, we should ask them why?

Maybe we're not getting at what we think we're getting at.

With all the data we are able to collect across the spectrum of our care -- including how many patients we are seeing, how we are billing, how compliant we are with all the boxes we need to check, how good we are at ensuring everyone is completing their healthcare maintenance items, and all the quality reports and data generated -- we need to make sure of two things. First, that we get this to the providers and the rest of the team in the right way, in a format that's useful to them. And second, that we ask whether there's something we're not looking at the right way.

Just as restaurants look at online reviews with a knowing eye -- understanding that the people who post are either those who had a great experience or a terrible one -- we in healthcare need to look at this information and decide if this is going to help us change what we do to better take care of our patients, or if we really just need to rethink how we are asking the questions.

To do this, we need to first find out whether this question is going unanswered just at our institution, or across all the institutions where these questionnaires are delivered. And, once we have that information, we should be able to go back to the people who wrote these questions and ask: can you rethink this question, or can we find a better way to ask it so that it gets at what we're really trying to learn?

Because if no one is answering the question, then it's probably not worth asking.

Providers are overwhelmed with dashboards, report cards, feedback, and daily, weekly, and monthly emails about how we are doing on so many measures. It's in our best interest to make sure this isn't just more noise, that it's quality information that is helpful, actionable, and reliable.

At the end of the meeting to review the findings from our questionnaire, I suggested we go through this process: find out if the response rate on this particular question was low at other institutions, and then approach the parent organization administering these questionnaires to ask whether the question needs tweaking.

No one wanted to really take responsibility for looking into this, but I'm hopeful we can keep this going, that we can improve the questionnaire so that we can improve our practice.

And the question is ...