
Finding the beautiful evidence in orthopedic literature today is sort of like standing on a corner in midtown Manhattan and trying to hear the flapping of butterfly wings. It requires training and an exceptional attention to detail. When is even a randomized controlled trial not good enough? How can a failure to understand statistics impact your practice? When should you question a surgical procedure shown on YouTube?

Dr. Edward Akelman, Chairman of the AAOS (American Academy of Orthopaedic Surgeons) Council on Education, has a view from the top that emanates from his work “on the ground.” As Director of the Warren Alpert Medical School of Brown University Hand Surgery Fellowship, he interacts daily with young surgeons who are struggling to make their way through the cacophony of materials coming at them from all directions.

He says, “This is the YouTube generation. So many papers are available online, in addition to videos that demonstrate how procedures are done. The most challenging thing is to ensure that young surgeons understand that everything out there is not peer reviewed or evidence based.”

The overarching goal, says Dr. Akelman, is that when young surgeons read an article, they see the process behind the words…that they understand what steps the researcher went through to arrive at his or her conclusions. “Journal clubs remain an appropriate venue for teaching these skills because they invite a back and forth discussion. One usually begins by asking, ‘What is the hypothesis of the article?’ Then you examine the methodology section to determine if the study was set up appropriately such that it will answer the hypothesis/study question. Let’s say a study’s goal is to determine the outcomes of the surgical treatment of distal radius fractures. If you read the methodology section and see that there were 60 patients in the study, but that some had elbow fractures, as opposed to distal radius fractures, then you know that the exclusion criteria were not appropriate. This is a huge red flag as it results in bias. If the two groups are not equivalent in terms of how the study was set up then the conclusions you draw may be off the mark.”

Dr. Akelman, who says that many of the best literature mentors are those who have served on editorial boards, adds, “You also want to see that the discussion includes sufficient referencing of previous work in the same area so that younger surgeons have some perspective on what was done in the past and how effective it was. Fundamentally, much of this evaluation process is the same as it was 30 years ago. The difference today is with regard to online texts and video. Even though a technique is on YouTube, that is not some automatic stamp of approval; you must verify that there is peer reviewed literature to support it.”


Just as one should never make grand assumptions in the research arena, one should not jump to conclusions about people. In his role with AAOS, Dr. Akelman tries to put the brakes on some runaway assumptions about the younger generation of orthopedists. “I often encounter people who think that the new generation of orthopedists are just hurriedly perusing the literature and not thinking through whether the technique or technology involved actually works. I think these younger surgeons are more sophisticated than that, however, and it is only the way they approach things that differs. They are increasingly pressed for time, and have more and more to learn…all the more reason to not waste time with literature that leads them down the wrong path.”

Dr. Michael Schafer, Chairman and Professor of the department of orthopedic surgery at the Northwestern University Feinberg School of Medicine, has trained nearly 300 individuals in the art of reading the literature. He states, “My primary method of teaching this skill to residents is via our journal club. The purpose is not to have residents memorize the article, but to have them critically evaluate the methodology and discussion sections. The goal is that someone will be able to read an article, analyze it, and make the right decision as far as whether to incorporate the contents of the piece into their practice.”

Putting a magnifying glass to his process, Dr. Schafer explains, “First, I ask the residents or fellows to describe the article and select the most relevant points. I have seen that some people—in an effort to save time—are tempted to jump to the discussion section without looking at the methodology. After I slow them down, I ask them to discuss whether it is a randomized controlled trial (RCT), if it is blinded, if it is retrospective, etc. I then go over the importance of levels of evidence.”

Aside from relying on respected publications and authors, Dr. Schafer recommends that readers have a strong understanding of the methodology section. “Someone may miss the fact that a study was not an RCT and assume that the research reflects a significant clinical breakthrough. Also, in evaluating the statistics, one should examine how many subjects there were and/or how many animals were used. That gives you some idea as to the strength of the study.”

He continues,

After they look at the methodology section I have the students review the discussion section and see what has been presented in the past on this topic. Is it new and relevant? Does it expand on prior work or just validate what was already in existence? Finally, will the study result in a clinically relevant change in the way they approach patients? Questions are the fundamental tools of the researcher—as well as the reviewer.

As the Associate Editor of the Journal of Bone and Joint Surgery (JBJS), Dr. Robert Bucholz, has the inside track on what constitutes stellar literature. Dr. Bucholz, also a former President of AAOS, states, “Peer reviewed literature is the central feature of continuing education, hence surgeons must know how to read it critically. Such literature has changed dramatically in the last 20 years. Whereas the mainstay used to be retrospective or case series, we see few of those now. Currently, there are three types of peer-reviewed studies that are widely used. The first is the RCT; the authors of these studies tend to follow the CONSORT (Consolidated Standards of Reporting Trials) guidelines that outline acceptable ways to design and conduct the trials. Then there are systematic reviews, which attempt to synthesize the data from several high level studies. Finally, there are epidemiological studies based on databases that arise from sources such as Medicare, hospitals, etc. Even though the studies are often flawed because they don’t contain all of the information you want, the sheer numbers give these studies a lot of weight.”

The RCT, given a lot of credence in the world of research, must be explored with care. Dr. Bucholz: “Those reading an RCT must understand all of the potential biases, such as attrition and selection bias. For example, with regard to the latter, you can end up selecting patients that aren’t representative of the entire population, meaning that your conclusions cannot be generalized.”

“Systematic reviews, which can include retrospective case studies, can be helpful if they are well structured and unbiased. The problem with them, however, is that at the end of the day you may not learn anything new. Let’s say you have 10 articles on a medical condition containing a total of 1200 patients. But if 1000 patients come from just one study the question is, ‘What is the benefit of having 200 extra patients?’”
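Dr. Bucholz’s point about one large study swamping a pooled result can be sketched with a few lines of arithmetic. The figures below are invented for illustration (one 1000-patient study with a 90% success rate, nine small studies totaling 200 patients at 40%); they are not from any actual review.

```python
# Invented numbers: one dominant study plus nine small ones, 1200 patients
# across 10 articles. Naive pooling by patient count lets the single large
# study dictate the answer, even when the small studies disagree.

big_study = (1000, 900)                          # (patients, successes): 90%
small_studies = [(25, 10)] * 4 + [(20, 8)] * 5   # nine small studies, 200 patients, 40%

all_studies = [big_study] + small_studies
n_total = sum(n for n, _ in all_studies)
pooled_rate = sum(s for _, s in all_studies) / n_total
small_rate = sum(s for _, s in small_studies) / sum(n for n, _ in small_studies)

print(n_total)                 # 1200 patients across 10 articles
print(round(pooled_rate, 3))   # 0.817 -- nearly the big study's 0.90
print(small_rate)              # 0.4 -- the nine small studies tell another story
```

The 200 extra patients barely move the pooled estimate, which is exactly the “what is the benefit?” question Dr. Bucholz raises.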

Dr. Bucholz, Chair of the University of Texas Southwestern Medical School, sheds light on his own approach to teaching the literature. “We begin by having the junior residents review the manuscript at hand, giving their opinions on the strengths and weaknesses. Then the senior residents come forward with an historical perspective and discuss how this specific study fits into the overall literature. The most difficult thing to teach is the statistics.”

“Generally speaking, orthopedists aren’t well trained in statistics…and the statistics involved in peer reviewed literature are complex. JBJS has actually hired three deputy editors for statistics and methodology to serve as screeners. The most talented and prolific authors are those who have degrees in epidemiology, have taken specific courses, or have studied under people who are extremely good in methodology.”

“Unfortunately, most residents and fellows aren’t learning statistics in a formalized way.”

At the end of the day, if a research question isn’t helping patients live healthier lives, then it is best left in the mind of the researcher. Dr. Bucholz states,

When training people on the literature I strongly emphasize the difference between results that are statistically significant and those that are clinically important.

“Let’s say you are reviewing a study on minimally invasive (MI) total hip surgery. The results of the gait analysis show that while at three months there is no difference between a conventional and a minimally invasive approach to the hip, at six weeks there was a statistically significant difference in hip flexion contracture in the MI group. Those inexperienced in the literature realm would say, ‘Oh, the p-value is less than .05 so that means they are more likely to have a faster rehabilitation.’ The fact, however, is that this statistical difference may be of no clinical significance in terms of their ability to go about their daily lives.”
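The distinction Dr. Bucholz draws can be made concrete with a toy calculation. Below is a minimal sketch using invented numbers (a 2-degree difference in hip flexion contracture, SD of 6 degrees, 100 patients per group) and a plain two-sample z-test built from the standard library; it is not the analysis from any actual gait study.

```python
# Invented numbers: with a large enough sample, a small between-group
# difference clears p < .05 while remaining clinically trivial.
import math

def two_sided_p(mean_a, mean_b, sd, n_per_group):
    """Two-sided p-value for a difference in group means, assuming a
    common standard deviation and equal group sizes (simple z-test)."""
    se = sd * math.sqrt(2 / n_per_group)
    z = (mean_a - mean_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical 6-week hip flexion contracture (degrees): 12 vs 10.
p = two_sided_p(mean_a=12.0, mean_b=10.0, sd=6.0, n_per_group=100)
print(round(p, 3))   # ~0.018: "significant," yet 2 degrees may not change
                     # how a patient goes about daily life
```

The p-value alone says nothing about whether a 2-degree difference matters to a patient climbing stairs, which is the trap Dr. Bucholz warns the inexperienced reader against.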

Fortunately, when it comes to spotting the beautiful evidence, the butterfly, amidst today’s crowded stream of orthopedic resources, the best technique is one used by orthopedic surgeons every day…a methodical approach.
