Study by Krannert prof Susan Lu shows cardiac surgeons’ online ratings prove reliable
Friday, September 14, 2018
Looking for a high-caliber surgeon may be easier than previously thought.
Critics of online rating platforms have argued that physicians' online ratings are little more than a popularity contest — based on the interpersonal encounter a patient has with a doctor rather than on the doctor's adherence to current best practices. Despite that skepticism, a new study suggests that the time-honored word-of-mouth method for finding quality health care may hold up after all.
Susan Lu, associate professor of management at the Krannert School of Management, and her co-author Huaxia Rui from the Simon Business School at the University of Rochester noticed the lack of systematic work examining the accuracy of patients' reviews. So they opted to study RateMDs.com, the original doctor ratings site with over 2 million reviews, and its assessments of cardiac surgeons in Florida.
The study, titled “Can We Trust Online Physician Ratings? Evidence from Cardiac Surgeons in Florida,” has been published in Management Science.
One of the first startling findings Lu encountered was that surgeons with higher ratings also had higher mortality rates. But there's a reason for that.
“In general, severely ill patients are less likely to seek treatment from doctors with low ratings,” she says. “Family members or patients with life-threatening diseases are more likely to do their homework to understand the disease, so they have more knowledge to observe the outcomes and contribute accurate evaluations.”
Once the issue of sicker patients selecting higher-rated doctors is accounted for, Lu says, online ratings begin to better reflect mortality rates. This suggests that reviewers are not, as previous studies have claimed, unaware of good criteria for evaluation — such as the physician's level of knowledge and helpfulness.
One challenge the system faces, however, is the scarcity of reviews. “The online physician ratings system is still in its infancy,” Lu says. “Large numbers of physicians are still unrated.”
She says the reasons for the underrepresentation of many doctors in online reviews are mainly unexplored, but a lot of that has to do with the anonymity factor.
“If we had the chance to know who contributed reviews, then we may be able to identify what type of people are more likely to write a review, and why they were motivated to write it. From there, we could provide useful suggestions,” she says.
But motivating people to write reviews, particularly with rewards, has its drawbacks.
“You can over-bonus,” she says. “This may incentivize some fake reviews. That’s not something you want to encourage, and that’s why I hesitate to suggest that kind of thing.”
But perhaps, she says, Uber is a good model to follow.
“They ask you to leave a review before you order the next ride. If patients scheduled doctor visits through a similar platform, it could prompt them to review the provider after each visit. This way, no one who hasn’t used a provider’s services could leave a review, and reviewers couldn’t provide multiple reviews for the same service,” she says.
Lu plans to explore the reasons behind the scarcity of reviews, along with ways to increase their number and reliability, so that, ultimately, patients can more easily access quality care.
By Maura Oprisko