Orthopedic and spinal implant surgery—the replacement of human bone and soft tissues with man-made metals, plastics, and other materials—is usually thought of as a recent development, a product of modern science, new technologies, and new materials. But that view is only superficially correct.

Advances in science and medicine typically have far earlier beginnings and take much longer and more difficult roads than most people realize. This was certainly true with regard to orthopedic implants: other treatment options were tried first (and found inadequate), and orthopedic surgery itself also had to be perfected, an achievement that did not happen overnight.

Yet the common perception is understandable. As late as 1950, the year L.D. Beard graduated from Whitehaven High School, medical implants were virtually unheard of among the general population, and orthopedic surgery was used mainly to treat severe bone fractures. “Hip surgery” was something surgeons did for broken hips. In contrast, if the problem was not a fracture but a condition associated with old age or disease, the customary treatments were palliative only. Canes and wheelchairs offered improved mobility, and new pain relief medicines replaced addictive opiate-based drugs, but skeletal deformity and joint pain were still widely accepted as unpleasant facts of life.

In particular, debilitating pain resulting from the deterioration of hips and other joints was seen as the natural, irreversible, and uncorrectable consequence of aging. Joints wore out, period. This had been true when the first medical texts were written, and it was still true in the mid-twentieth century. But technology and medicine were rapidly converging and about to cross a major threshold together.

Orthopedics Took Root as the Practice of Correcting Childhood Deformities

Nicolas Andry de Bois-Regard, 1700s / Source: Wikipedia.org

The term orthopedia itself did not even exist until 1741. That was the year Nicolas Andry de Bois-Regard, a professor of medicine at the University of Paris, used it as the title for his book describing “the different methods of preventing and correcting the deformities of children.”[i] He formed the word (in French, orthopédie) by combining the Greek words ortho, meaning straight or free from deformity, and paidios, meaning “child.” This reflected what was then the primary concern of physicians who treated skeletal problems: the correction of childhood deformities.[1]

The causes of such conditions and related pain included not only birth defects and fractures, but osteomyelitis (an infection of the bone or bone marrow) and diseases such as polio and tuberculosis of the bones and joints. But in Andry’s time medical options remained few: each disease or infection would run its usual course—vaccines and antibiotics lay far into the future—and subsequent treatment typically involved the use of corrective devices that were often primitive.

Where physicians could make a difference, however, was in the treatment of bone fractures through careful application of splints and traction devices, but even this approach had its limitations. If a complicated fracture required amputation or other major surgery, that lay beyond the skills and training of ordinary physicians. The reason for this was a division of labor that seems bizarre today: for centuries actual surgery was performed by barber-surgeons, not by physicians.

Ambroise Paré, author of Ten Books of Surgery / Source: Wikipedia.org

In contrast, the practice of medicine was the province of physicians who mostly relied on academic theory—the views of the second century Roman physician Galen still held great sway—rather than empirical science, and the work of the medical profession itself primarily focused on diagnosis and non-intrusive medical practices. This professional divide probably developed in the Middle Ages for arcane religious and social reasons, but it continued on into the early years of the modern era.

Tradition notwithstanding, advances in surgery did eventually find their way not only into hospitals but also into medical treatises, and by the 1500s a French surgeon, Ambroise Paré, could publish Ten Books of Surgery, a massive work that greatly improved the treatment of battlefield wounds and became a standard text for surgeons.[ii]

War had always meant broken bones and amputations of severely damaged limbs, but the instances of each multiplied after the appearance of cannons in the fourteenth century. Horrific wounds became commonplace (as did warfare, seemingly). Army surgeons were slow to develop new skills and techniques to address life-threatening injuries. Paré wanted to change that, and he is now credited with laying “the foundation upon which the modern institution of surgery is built.”[iii]

As European wars came and went, the care and treatment of battlefield injuries continued to improve, as did the status of surgeons. So much so, in fact, that by the mid-eighteenth century, members of the relatively new Academy of Surgery in Paris were even accorded the same privileges as the Faculty of Medicine.[iv] With surgeons no longer regarded as mere technicians, traditional medicine and surgical skill began to overlap, if not actually join.

But progress in the field of orthopedic surgery was, literally, painfully slow. The Napoleonic Wars of the late eighteenth and early nineteenth centuries, with armies numbering in the tens of thousands, pushed battlefield medicine and surgical skills to their limits. And while the French army did develop a remarkably efficient medical service system for its time, the death rate in military hospitals in the last two years of Napoleon’s reign hovered above 28 percent.[v] It would take another war, fifty years later on another continent, to bring about some of the greatest surgical advances in the treatment of severe bone injuries.

The American Civil War

Orthopedic surgery in the United States had its origins in the American Civil War, a conflict in which advances in weaponry, especially in terms of the damage they could inflict upon the human body, had far outpaced medical skill and technology.

A common misconception is that Civil War surgeons routinely resorted to amputation of arms and legs when confronted with any compound fracture. The records show otherwise. Amputations were certainly common, but the treatments that were attempted were far more varied than this notion allows, and the large number of injuries presented surgeons with almost limitless opportunities to try new procedures or modify old ones in their efforts to save limbs and maximize patient rehabilitation afterwards.[vi]

Complicating every surgery was the risk of infection, which was often fatal.[vii] Although Joseph Lister’s discoveries in antiseptics would not be published until after the Civil War, many medical practitioners had at least some idea of the causes of sepsis, commonly known as blood poisoning, and various antiseptic procedures were tried. Unfortunately, the niceties of sterilization frequently had to be overlooked on the battlefield or, sometimes still, were simply not regarded as essential.[2]

Civil War surgeons in front of their tent / Source: National Museum of Civil War Medicine

Despite this, many army surgeons developed new ideas and techniques throughout the war, and the conflict brought about major improvements in the care of the wounded on both sides. It was also during the war that the American medical profession first began to specialize—doctors skilled in surgery were called “operators”—and this helped weed out inexperienced physicians and surgeons and those whose skills were unsuited to the procedures necessary to save joints and limbs.[viii]

Hospitals were quick to adopt the idea of specialization, especially in the South where the number of maimed soldiers returning home created a desperate need for specialized care.[ix]

The extent of this need is evidenced by an observation recorded by one postwar visitor to a southern town. While attending a public meeting in Aberdeen, Mississippi, he noticed that fully a third of the 300 or so men present had lost either an arm or a leg. Nor was Aberdeen unique. In the first year after the war ended, the State of Mississippi spent one-fifth of its entire revenue on artificial limbs.[x]

Civil War Veteran, Samuel Decker, and his Prosthetics

Because the treatment of wounds varied greatly in the early days of the war, owing to the different experience levels of individual surgeons, each side developed manuals to improve and standardize medical care.[3] Unfortunately, medical journals, publications critical to the dissemination of information on new techniques and procedures, were another matter, and these were especially hard to find on the Confederate side. It was only towards the end of the Civil War that the Confederate States Medical and Surgical Journal was published and widely circulated, and by then it was too late to be a significant factor.[xi]

Even so, the lack of published journals in the South should not be taken as evidence of a lack of quality or knowledge among Confederate medical practitioners. Physicians and surgeons in each army would treat captured wounded soldiers as well as their own, and this often allowed them to see the work that surgeons on the other side had performed and to comment on it. Reports show that Union surgeons were often surprised at the expertise of Confederate surgeons and the difficulties they had overcome in certain procedures. Indeed, it appears that out of this grew a degree of mutual rivalry, with each side trying to outdo the accomplishments of enemy surgeons, and the combination of sharing and competition actually enhanced the quality of care for wounded soldiers on both sides.[xii]

Also important to future orthopedic medicine—especially the lengthy surgeries that would be needed for medical implants—was the extensive use of anesthesia during the war. Despite the “biting-the-bullet” image frequently associated with the treatment of the wounded, it was relatively rare for Civil War soldiers to undergo surgery without any anesthesia whatsoever. Both chloroform and ether were available; ether’s first surgical use had come in 1846, and both saw battlefield use during the Mexican War in 1847. Of the two, chloroform became the anesthetic of choice because it was more accessible—a key factor especially in the South—and because it was more easily transportable and presented less of a fire hazard than ether. And the use of anesthesia was generally safe: less than one-half of one percent of all the patients to whom it was administered died as a direct result of the anesthesia itself. The use of opium in follow-up care to suppress pain also allowed more extensive surgical procedures to be conducted.[xiii]

The kinds of fractures surgeons encountered varied greatly, although the majority of the injuries suffered on both sides of the conflict were wounds to extremities, a fact no doubt attributable to the effectiveness of the rifled musket and the recently developed minié ball. A bullet shot from an old smoothbore musket would typically break the bone upon impact, whereas the energy expended by the new weapons and ammunition would splinter, shatter, and split the bones.

Besides the extent of the damage, other factors that influenced the decision whether to amputate included which extremity was involved and the wound’s location—the nearer the break to the torso, the riskier the surgery. The high mortality rate (nearly 100 percent) associated with amputations for wounds to the upper thigh probably explains why amputations in such instances were rare.[xiv] As the war continued, surgeons became more conservative in their approach to treating wounds, with amputation becoming the least favored option.

Wounds to the shoulder and hip typically posed the greatest difficulties for surgeons because these injuries required repair of the joints themselves, a very complex and exacting procedure, and also because it was so difficult to keep the repaired joint immobilized. The most effective treatment for shoulder injuries was the surgical removal of part or parts of the bone—in medical parlance, “resection arthroplasty.”[4] It offered the best chance for retaining a functional limb. On the other hand, hip wounds produced the highest mortality rates, regardless of the method of treatment.[xv]

The conservative, and safest, treatment for hip injuries typically involved various traction devices, such as Buck’s traction, variations of which are still widely used today.

Buck’s Traction / Source: Springer Nature Switzerland AG

Invented by the New York physician Gurdon Buck, the mechanism employed a system of ropes and weights that pulled or stretched the extremity to keep it immobilized during healing. Other devices were Smith’s anterior splint (named for Nathan R. Smith), which suspended the leg from the ceiling, and Hodgen’s splint, which was similar. Both used wire rods that could be bent to conform to the joints. Unfortunately, even with these genuinely helpful devices there was often lasting deformity as well as immobility of the joint, a condition known as ankylosis.[xvi]

In rare instances, excision (the surgical removal of part of the hip) was chosen by surgeons who believed that it offered a better alternative to amputation or traction, but this procedure carried a very high fatality rate. Of the sixty-six total hip excisions documented during the war, only six patients survived. And in none of the cases that involved the acetabulum (the socket part of the hip joint) did the patient survive the surgery.[xvii]

Yet despite the staggering mortality rates associated with Civil War wounds, and the surgery needed to treat them, the field of orthopedic medicine benefited greatly in terms of new learning and the development of individual expertise. Not that medical research was the reason behind these high-risk measures—surgeons and physicians were desperately trying to save life and limb. But it is undoubtedly true that in no other way could so many surgeons have gained so much experience and understanding of orthopedic injuries, and this carried over to the care of civilian patients in the post-war years. Equally important, innovations developed by American field surgeons, North and South, spread to European medical circles.[xviii]

Nor were new surgical techniques the war’s only impact on orthopedic medicine. The teaching and training of surgeons improved dramatically, and in 1861 Lewis A. Sayre became the nation’s first professor of orthopedic surgery, at Bellevue Medical College in New York City.[xix] He became widely known for using casts to treat spinal disorders, a technique that would later influence modern treatments for scoliosis.[xx]

Hospital for Special Surgery Is Founded

The original Hospital for Special Surgery in New York City, circa 1870 / Source: Wikipedia.org

An important institution also came into being during the war years. In 1863, Dr. James Knight founded what is now the Hospital for Special Surgery, the first U.S. hospital to specialize in orthopedics. Knight’s medical experience had come not in the army but through his work with a charitable organization called the New York Association for Improving the Condition of the Poor. His chief focus was “the construction of appliances for the restoration of impaired powers of locomotion in children laboring under deformities both congenital and [those resulting from] infantile paralysis.” It was this concern rather than Knight’s actual accomplishments—unfortunately, most of his “appliances” turned out to be not very effective—that earned him a place in orthopedic history.[xxi] That, plus the hospital he founded, which remains one of the foremost orthopedic hospitals in the country.

Many surgeons who had gained experience treating bone injuries during the war went on to direct new orthopedic hospitals that opened after the war ended, and the practice of orthopedic surgery began to be recognized as a true specialty. The American Orthopaedic Association was formed in 1887, becoming the first nationally organized group of orthopedic surgeons, and two years later the Association began to publish the proceedings of its meetings in Transactions of the American Orthopaedic Association, the original name of today’s Journal of Bone and Joint Surgery.[xxii] These wartime and post-war advances in the profession of orthopedic medicine, along with increased sanitation and hygiene in the surgical arena and more effective treatment of infections, would continue to shape the practice of orthopedic surgery.

In short, the American Civil War was both a laboratory and a catalyst for future innovations, but the next great advances toward safe and effective hip surgery would not involve battlefield injuries, nor would they take place in the U.S.

Early Hip Surgery

Battle wounds and other trauma, of course, were not the only causes of serious hip problems. This is due to the nature of the hip itself. The hip joint consists of two bones: the femur, or thighbone, and the acetabulum, a kind of socket in the hipbone into which the femur extends. The upper end of the femur is wider than the rest of the bone, forming a kind of ball that rotates smoothly—at least in healthy persons—within the joint. In fact, the hip is usually described as a ball-and-socket joint. Over time, however, one or both bones can grind down, develop spurs or other deformities due to disease, or even break, although fractures of the acetabulum are comparatively rare. Because the hip joint must bear the body’s weight, any significant deterioration of either bone can interfere with the joint’s normal movement, causing severe pain that typically only worsens over time. Eventually, even the slightest movement can be excruciating.

Some skeletal problems are present at birth or develop at an early age. In the era before modern anti-bacterial drugs—sulfa would not be introduced until the 1930s and penicillin would not come into wide use until World War II—children and young adults in particular often were victims of acute joint infections that spread to vital organs. This meant that diseases of the joint were not only crippling—they could be fatal.

Among older adults, a common ailment was arthritis of the hip, or osteoarthritis, a condition which for many was manageable but which caused severe pain in others. Aside from opiates, the only relief in some cases was amputation, and as early as the eighteenth century, surgeons in America and Europe were experimenting with this procedure.[xxiii] But even as a last resort, this was not a satisfactory solution, and many surgeons continued to search for alternative treatments.

That search for a safe and more effective treatment would take decades, and the history of that search can have the appearance of a series of biographies—of physicians and surgeons, of inventors and technicians, and also of entrepreneurs and manufacturers. A few of these individuals would gain renown for their achievements, garnering honors and even knighthoods, but modern orthopedic surgery came about as the end result of many trials and many errors. The errors are forgivable: desperate situations can call for desperate measures, and several pioneers deserve special mention.

One early innovator was Henry Park of the Royal Infirmary in Liverpool, England. Appointed surgeon there in 1767, he practiced excision of the joint, a procedure in which part of the bone was removed and the remaining bone tissue fused, and he became a strong advocate for its use as an alternative to amputation.[xxiv] But excision presented greater difficulties for surgeons than amputation, and it was never widely accepted. Another English surgeon, Anthony White of the Westminster Hospital in London, has been credited with performing the first excision arthroplasty in 1821.[xxv] This did reduce pain, but the price was diminished stability.

Occasionally, the problem was not pain caused by movement within the hip, but by the fusion of the hip bones themselves, resulting in a rigid joint. This required an entirely different approach, and in 1825, in Philadelphia, Pennsylvania, John Rhea Barton performed the first osteotomy on a fused hip joint.[xxvi] Osteotomy involves surgically removing part of the bone and reshaping the remaining bone to create a better alignment. The initial results of Barton’s technique were positive, but over time the joint would become rigid again, and subsequent surgeries produced mixed results. There was also a high mortality rate with the procedure, approximately 50 percent.[xxvii] And repeated surgeries meant a greater likelihood of infection and death.

The next step in treating problems of the hip joint was the surgical placement of material into the joint itself, between the femur and the hip socket, a procedure called interposition, designed to reduce friction between the bones. The technique was first used in 1860 by Auguste Stanislas Verneuil in Paris, who inserted neighboring soft tissue into the joint, but there was little interest in the procedure until the Frenchman Léopold Ollier wrote about his own interposition of adipose tissue (basically body fat) into hip joints in the 1880s. Because he did not affix the tissue directly to the femur, his technique was only a half-step and never completely effective.[xxviii]

Shortly thereafter, in Breslau, Germany, Vitezlav Chlumsky experimented with a variety of interposed materials, including “muscle, celluloid, silver plates, rubber struts, magnesium, zinc, glass, Pyrex, decalcified bones, wax and celluloid.”[xxix] Again, none of these proved satisfactory, but Chlumsky was ahead of his time in another important respect: unlike most of his contemporaries, he would first try each material on animals before using it in humans. His list of materials appears strange today, but finding the right material for hip interpositions would remain a major goal of hip surgery a hundred years later.

The transition from simply inserting material into a joint to developing a prosthetic hip implant was led by Themistocles Glück in Germany in 1890. Glück used a ball-and-socket joint made of ivory, which was fixed to the bone with nickel-plated screws. He also pioneered the use of bone cement. But despite these genuine advances, none of his implants was successful, possibly due to infections that had existed prior to surgery.[xxx]

Sir Robert Jones / Source: Wikipedia.org

After the turn of the century, Robert Jones of Britain experimented with a novel technique: covering reconstructed femoral heads with gold foil. The procedure seemed to work well, and one patient “still retained effective motion at the joint” twenty-one years later. Deeply committed to advancing the treatment of fractures, Jones became a prominent figure in British orthopedics: in addition to his surgical accomplishments, he was instrumental in the founding of the British Orthopaedic Society in 1894, and in World War I he served as Inspector of Military Orthopaedics, a position that enabled him to influence the care and treatment of soldiers suffering severe bone injuries.[xxxi]

In the years before and immediately after WWI, three other surgical techniques were tried. One was the use of various natural materials in osteotomies, procedures in which the hip bone is cut, reshaped, or partially removed. The second was known as arthrodesis, the cutting away of part of the joint and then fixing the joint to prevent motion. The third procedure was the removal of the top end of the femur and the repositioning of the remaining femur upward into the hip socket to form a makeshift joint. All were radical, high-risk operations with highly questionable results: each procedure inevitably resulted in a loss of mobility or stability, if not both, and none actually relieved pain, which usually had been the major reason for the procedure in the first place. Even worse, some patients were unable to walk at all afterwards.[xxxii]

Marius Nygaard Smith-Petersen / Source: Cloudfront.net

Much better results were achieved in Boston in the 1920s when Norwegian-born Dr. Marius Smith-Petersen began to use synthetic materials in a variation of the then standard interposition procedure. His approach was to place a cup-like glass mold between the femoral head (the top of the thigh bone) and the acetabulum to prevent the femoral bone itself from moving directly against the hip socket. This, he believed, would promote the growth of a smooth membrane around the glass and help the body heal itself. The idea was not guesswork on his part: Smith-Petersen had noticed the growth of such a membrane in an unrelated surgery.

Glass, however, proved to be too fragile—several molds broke—and he subsequently turned to other materials, including celluloid, Pyrex, and steel. It began to appear that nothing would work satisfactorily. Then in 1937, acting on a suggestion from his dentist, Smith-Petersen tried Vitallium, a trademarked chrome-cobalt alloy that was beginning to be used in dentistry. The results were good if not spectacular—Smith-Petersen eventually achieved a 62.5% success rate—and his prosthesis became a popular option for patients with diseased acetabulums.[5] He implanted some 500 Vitallium molds in all, but his procedure would not make its way to England until after World War II.[xxxiii] By then, other researchers were looking at an even more promising technique.

Unexpected Breakthrough: The First Hip Replacement

Throughout the first decades of the twentieth century, orthopedic surgery still mainly focused on the femoral half of the hip joint, and various materials were tried in prostheses that replaced the femoral head, the top of the thighbone that fits into the hip socket. But even if the right material could be found, this technique provided only a partial hip replacement (hemi-arthroplasty), and for many patients this meant only a partial solution.

An unexpected breakthrough came in 1938 when Philip Wiles performed the first total hip replacement at the Middlesex Hospital in London. His femoral prosthesis was stainless steel and was held in place by a side plate secured by screws and bolts. It was a bold idea, but the procedure was not completely successful: of the six young patients who underwent the operation, all of them suffering from rheumatoid arthritis, only one could walk without pain thirteen years later. Still, it was a significant milestone. Whether Wiles might have improved upon these initial results will never be known. Before he could follow up and refine his procedure, World War II intervened and the emphasis in orthopedics abruptly shifted, again, from birth defects and diseases to battlefield wounds.[xxxiv]

During the war, it was mainly American surgeons who took the next steps. Physicians Austin Moore and Harold R. Bohlman, at Johns Hopkins Hospital in 1943, tried replacing the femoral head with a metal prosthesis fixed in place by a plate screwed to the outside of the femur, in much the same manner that Wiles had developed.[xxxv] This side-plate fixation technique soon gave way to a better one in which one end of a metal rod (the stem) was inserted down into the hollow core of the femur. This method gained wide acceptance, and in the early 1950s Moore refined his prosthesis by placing perforations in the stem to allow bone in-growth that aided in firmly fixing the device inside the femoral canal.

Working independently of Moore, Dr. Frederick R. Thompson of New York developed a similar prosthesis but with a femoral head made from a chrome-cobalt alloy. Like Moore’s procedure, it dealt with only half of the joint, the femur, but their respective devices were key steps in developing methods and materials that would later be used in total hip arthroplasty.[xxxvi]

In the years immediately following the war, another American, Edward Haboush, did attempt a total hip replacement, using Vitallium and an acrylic cement, but with poor results. The failure of his procedure has usually been attributed to his using the cement to glue the prosthesis to the bone rather than as a grout surrounding the stem to seal it within the bone canal.[xxxvii]

More successful were two French brothers, Robert and Jean Judet, whose efforts attracted considerable interest. In 1946 they implanted a femoral prosthesis made from acrylic, a kind of plastic. Even though the acrylic showed signs of wear almost immediately and was therefore not really suitable as an arthroplasty material, the design itself was adopted by others, who would refine it and fabricate prostheses from a chrome-cobalt alloy.

The exchange of information, both through journals and seminars, was critical to further advances, and throughout the 1950s orthopedic research continued to build on the methods and materials pioneered by Smith-Petersen, Moore and Bohlman, the Judet brothers, and, most importantly, Wiles. In fact, it was Wiles’ work in particular that shifted the focus back across the Atlantic to Britain, thanks in part to his textbook Essentials of Orthopaedics, first published in 1949.[xxxviii] And the next significant step would be taken by a surgeon who had been one of Wiles’ students.

Dr. Kenneth McKee / Source: Joint Implant Surgery and Research Foundation

Prior to World War II, Dr. Kenneth McKee had studied hip arthroplasty and had even made models of hip prostheses, but he had not yet experimented with any of them when, as with Wiles, military service interrupted his research. However, after the war McKee returned to the Norfolk and Norwich Hospital in England and essentially picked up where Wiles had left off.

Unlike other researchers, McKee worked mostly alone, except for collaborations with John Watson-Farrar, an orthopedics colleague.[xxxix]  And while others focused on improving femur-only implants, McKee was convinced that total hip replacement was the answer, especially when the hip socket itself was diseased. This meant making a prosthesis for the acetabulum in addition to one for the femoral head.

Initially using stainless steel for both, McKee attached the hip socket prosthesis with screws and inserted the femoral component (a stem with a ball at the top that fit into the socket) directly into the femur’s shaft, basically following Thompson’s design and using a plate and screws on the outside of the bone.[xl] In 1953 he began using Vinertia, an alloy of chrome and cobalt, but by the end of the decade his failure rate was still high—46 percent. The problem was prosthetic loosening: the prosthesis would not keep a firm fit.

McKee was on the right track, but his methods would soon be supplanted by an even better procedure, one developed by a fellow countryman.


1 In addition to the word “orthopedics,” Andry made another lasting contribution to orthopedic medicine. An engraving on the book’s frontispiece depicted a crooked sapling held upright by a straight stake, an image widely recognized today as a metaphor for the treatment of childhood deformities. A crooked tree is also depicted on the seal of The American Academy of Orthopaedic Surgeons.

2 Even after Lister’s findings became widely known, surgeons far removed from any battlefield were slow to adopt procedures that today are taken for granted. Gauze masks and rubber gloves would not come into use until the late 1880s.

3 Union surgeons were issued the Treatise on Gunshot Wounds by Thomas Longmore, a British surgeon who had served in the Crimean War a few years earlier. Their Confederate counterparts relied on A Manual of Military Surgery for the Use of Surgeons in the Confederate Army, by Julian John Chisolm, professor of surgery at the Medical College in Charleston. Kuz and Bengston: 11.

4 From arthro, meaning joint, and plastia, meaning to mold: in brief, plastic surgery of the joint. Despite its common use today to describe surgery to repair visible signs of injury or aging, plastic surgery includes any surgery that repairs, restores, or replaces any bodily part, whether skin, bone, or organ.

5 Smith-Petersen’s practice was not limited to hip surgery. During World War II, he was one of several physicians consulted by a young John F. Kennedy, then a naval officer suffering from an as yet undiagnosed back problem.


[i] Ignacio V. Ponseti, M.D. “History of Orthopaedic Surgery,” Iowa Orthopaedic Journal 11 (1991): 59, http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=23228985&blobtype=pdf
[ii] Ibid: 62
[iii] Charles B. Drucker. “Ambroise Paré and the Birth of the Gentle Art of Surgery.” Yale Journal of Biology and Medicine 81 (2008): 201
[iv] Ponseti: 62
[v] Ibid.
[vi] Julian E. Kuz and Bradley P. Bengston, Orthopaedic Injuries of the Civil War (Kennesaw, Ga.:  Kennesaw Mountain Press, Inc., 1996): 9
[vii] Ibid: 16
[viii] Ibid: 11
[ix] Ibid: 11-12
[x] Walter Lord, “Mississippi: The Past That Has Not Died,” American Heritage Magazine, 17, no. 4 (June 1965)
[xi] Kuz and Bengston: 11
[xii] Ibid: 10-11
[xiii] Ibid: 12-13
[xiv] Ibid
[xv] Ibid: 25, 44
[xvi] Ibid: 42, 44, 52
[xvii] Ibid: 42-44
[xviii] Bick: 298
[xix] Jay M. Zampini, M.D., and Henry H. Sherk, M.D. “Lewis A. Sayre: The First Professor of Orthopaedic Surgery in America.” Clinical Orthopaedics and Related Research 466, no. 9 (September 2008): 2263; Whitman: 408; Bick: 300
[xx] Zampini: 2264
[xxi] Whitman: 409-410
[xxii] Klenerman: 6; American Orthopaedic Association website, http://www.aoassn.org/AboutEvolution.asp; Kuz and Bengston: 62-63
[xxiii] Pablo E. Gomez and Jose A. Morcuende. “Early Attempts at Hip Arthroplasty—1700s to 1950s” The Iowa Orthopaedic Journal, 25. (2005): 25
[xxiv] Lord Cohen of Birkenhead. “Liverpool’s Contributions to Medicine,” British Medical Journal, 1 (10 April 1965): 946
[xxv] Gomez and Morcuende: 25
[xxvi] Ibid: 25-26
[xxvii] Ibid: 26
[xxviii] Ibid
[xxix] Ibid
[xxx] Ibid: 26; N. J. Eynon-Lewis, D. Ferry, and M. F. Pearse. “Themistocles Gluck: an Unrecognized Genius.” British Medical Journal 305 (19-26 December, 1992): 1535
[xxxi] Gomez and Morcuende: 27; Klenerman: 6-7
[xxxii] Gomez and Morcuende: 26; Julie Anderson, Francis Neary, and John V. Pickstone, Surgeons, Manufacturers and Patients: A Transatlantic History of Total Hip Replacement (New York: Palgrave MacMillan, 2007): 21
[xxxiii] Gomez and Morcuende: 27; Anderson: 21-22
[xxxiv] Anderson: 20; Gomez and Morcuende: 28
[xxxv] Anderson: 20
[xxxvi] Gomez and Morcuende: 28; Anderson: 20-21
[xxxvii] Anderson: 22
[xxxviii] Gomez and Morcuende: 28; Anderson: 22
[xxxix] Anderson: 23
[xl] Ibid: 24.

 
