July 3, 2016 Ray Morgan

Self-immunization with Snake Venom

Few topics in venomous herpetology generate debate as contentious as that around self-immunization. The subject is so divisive and the opposing opinions hurled with such ferocity that it’s the only topic I specifically called out as having “worn out its welcome” in the posting guidelines for The Venom Interviews group on Facebook. (There’s an exception for peer-reviewed research published in credible journals, but I’m not sure that exception has ever been used.) This rule arose as a practical necessity in response to the certainty with which self-immunization discussions descend into loud, angry bar fights that monopolize the group for days at a time. I suppose it’s ironic to have written an article that’s off-limits for discussion in my own group.

I don’t expect this article will change the mind of anyone already invested in an opinion about self-immunization. But since there are a lot of people who are just hearing about it for the first time and are unsure what to believe amidst all the noise, I thought it might be helpful to try to examine the subject objectively, with as little prejudice as possible.

Here are the topics I’ll try to cover:

  • What is self-immunization?
  • Why is the debate so nasty?
  • Does it work?
  • Is there any application for it?
  • Are there other benefits?
  • Has it produced any new discoveries?

What is self-immunization?

In the context of this article, “self-immunization” (“SI” for short) is the practice of injecting snake venom in an attempt to cause one’s body to produce a titer of antibodies sufficient to at least partially mitigate the effects of envenomation by the chosen species.

Some of those who practice SI do so outside the public eye for practical reasons. Others see themselves as scientific pioneers, blazing new trails for science in the tradition of medical self-experimenters like Walter Reed, Albert Hofmann, Stubbins Ffirth, August Bier, Marie Curie, Barry Marshall, Elizabeth Parrish, and, of course, Bill Haast. There’s also a small subset of practitioners for whom self-immunization is a public spectacle.

Medical self-experimentation has a fascinating and colorful history. Its track record is mixed, producing both important advancements and catastrophic failures, and it has always been contentious. The flaws in evidence collected by self-experimentation are nicely summarized in Wikipedia’s article on the subject:

“Self-experimentation has value in rapidly obtaining the first results. In some cases, such as with Forssmann’s experiments done in defiance of official permission, results may be obtained that would never otherwise have come to light. However, self-experiment lacks the statistical validity of a larger experiment. It is not possible to generalise from an experiment on a single person. For instance, a single successful blood transfusion does not indicate, as we now know from the work of Karl Landsteiner, that all such transfusions between any two random people will also be successful. Likewise, a single failure does not absolutely prove that a procedure is worthless. Psychological issues such as confirmation bias and the placebo effect are unavoidable in a single-person self-experiment where it is not possible to put scientific controls in place.”

Self-immunization differs from most other instances of medical self-experimentation in that it is not performed by medical professionals. At present, SI is performed, apparently exclusively, by people without formal education in medicine or immunology, and this is evident in some fundamental flaws in their approach — the absence of things like baseline measurements, controls, double-blind trials, etc. The seriousness of these flaws seems to be underestimated or ignored by practitioners, and there appears to be little clarity around how hypotheses are formed and tested, how data are collected and interpreted, and how conclusions are drawn. By any measure, it’s a stretch to characterize current SI practices as “citizen science.”

Why is the debate so… venomous?

Aside from the issues related to SI directly, the nature of the debate itself is fascinating. While many scientists and most herpers seem to have shallow reservoirs of diplomacy, SI is a uniquely potent catalyst for dooming virtually any discussion to vitriolic ad hominem attacks, straw man arguments, and general mayhem.

What is it about this particular topic that makes it seemingly impossible to discuss rationally? After years of observing people argue over SI, it’s often possible to see the triggers that send the discussion off the rails. Opponents of the practice mock its proponents the moment they display some egregious misunderstanding of the science they believe they’re doing. Proponents often invite this ridicule with credulous, uncritical acceptance of half-baked hypotheses until they are disproved — the exact opposite of evidence-based skepticism. Proponents respond with anecdotes, and they deride the opponents as purists, elitists and “haters” (for those still using tween vocabulary), who are impeding progress and stifling discoveries with their silly, uncompromising insistence on rigor.

Each side is openly suspicious of the other’s motives. Opponents dismiss the proponents’ claims of “doing science” as a disingenuous cover for desperate, reckless bids to feed their egos with the amazement of admirers who don’t know any better. Proponents are accused of trying to emulate Bill Haast, who had a genuine medical need to protect himself 70 years ago, a need that no longer exists today.

Meanwhile, proponents reflexively reject these criticisms, claiming they are nothing but petty jealousy, that the naysayers are secretly bitter that they cannot exhibit such impressive feats of immunity. Skepticism is interpreted as an attack against the practitioner personally or against a personal hero (i.e., Haast). Inevitably, the argument deteriorates into explicit challenges to the opponents’ bravery, masculinity, or general badassery, and all hope for rational dialogue is lost. (Prediction: Responses to this article will follow the same trajectory.)

While the personalities involved and the scientific potential should be two distinct issues, from a practical standpoint, they are hard to separate. The discussion of SI is often overshadowed by the behavior of some (but certainly not all!) who practice it. It’s hard to be a credible public face of something that claims to be a scientific endeavor while, for example, conflating facts and opinions, being unclear what peer review means, misunderstanding what constitutes an experiment or observation, or — and I’m not kidding — challenging people to fights for disagreeing. (Since this article is about the practice and not the personalities involved, I’ve opted not to name names.)

Does it work?

Short answer: It depends.

Whether self-immunization works depends on how you define works. For any sufficiently specific definition of working, it should be possible to let data answer the question. Therein lies a central problem with SI today: As of the time of this writing, objective data on the subject are conspicuously thin, and this is especially remarkable given the extraordinary claims made in its absence. Not only are data lacking, but there’s not much to indicate that data-collection is getting any better.

However, it’s not necessary to abandon skepticism to concede that self-immunization appears to mitigate the effects of at least some components of at least some venoms to a point where symptoms are reduced, perhaps even greatly reduced, possibly even to a degree that an otherwise potentially fatal bite is survived without antivenom. In the absence of real data, these are bold assertions, but they don’t conflict, in principle, with what’s known about immunochemistry: venom is introduced, B cells make antibodies against it, and those antibodies neutralize the toxins to which they’ve been raised.

Yes, it would be possible to fake the claimed results. For example, one could use venomoid snakes or snakes that were so unhealthy that their venom production was severely compromised. A more rigorous science observer might not be so generous, but I’ll take the risk of saying that I don’t think that outright deception like that is generally what’s happening.

Aside from the anecdotes of individual practitioners, belief in the potential protective capability of self-immunization is bolstered by various studies by the US military, including programs that tested immunization against the venom of Naja naja in humans (1963) and toxoids of Deinagkistrodon acutus, Bungarus multicinctus, Protobothrops mucrosquamatus, P. elegans and Trimeresurus stejnegeri in rabbits and mice (Yoshio Sawai, 1968), often cited as the “habu studies” along with their predecessors involving Protobothrops flavoviridis and Gloydius halys. (Taxa updated here for clarity.) Each of these studies reported that immunization had some prophylactic value.

Not all venom toxins are created equal. Perhaps counter-intuitively, the simple toxicity (murine LD50) of a venom is almost certainly less important than what that venom does and how much of it there is. At least some neurotoxins seem to be mitigated by SI, and some toxins that affect blood coagulation might be as well. On the other hand, it seems highly improbable that even a high titer of antibodies would be a match for a massive dose of ferociously cytotoxic (tissue-destroying) venom from a large viperid like Bothrops or Bitis, which would completely overwhelm any antibodies in the tissue at the bite site.

At best, resistance is a better descriptor than immunity, and self-inoculation is a better use of the “SI” acronym than self-immunization.

So the interesting discussion is not so much around the century-old science of whether SI works, but rather whether there’s any legitimate application for it.

Is there any application for it?

Without dismissing the idea outright, the fact that hyper-immunity might be possible does not automatically make it the best option for protection against envenomation. Whether self-immunization is a good idea should be more a matter of data than of opinion, but the dearth of data leaves opinions to fight it out among themselves.

Is it possible to construct hypothetical scenarios in which hyper-immunity might be useful? Are there situations in which the potential benefits outweigh the risks? Much of the difficulty in answering those questions is that there is too little consensus on the risks and too little high-quality data on the benefits.

The known risks are not trivial. They include the things we know venom can do, like cause kidney, liver and brain damage. How much damage can it do in tiny doses? Unknown.

There’s certainly the risk of miscalculating the dose, and this error has landed a handful of aspiring self-immunizers in the ER. As far as I am aware, it has not yet landed any in a grave, but that’s more a testament to their doctors’ heroics than to the safety or predictability of the practice.

There’s a risk of taking a more-severe-than-expected bite, overestimating one’s immunity, delaying treatment, and realizing too late how bad the bite was. Delays in treatment could easily lead to more complicated treatment, a longer recovery, and a higher probability of permanent injury, like loss of digits or worse.

There are other risks, like allergy, abscess, and bacterial or viral infection, and quantifying those risks is essentially impossible.

So is there any scenario where self-immunization is worth the risks, the pain, and the general unpleasantness of regular self-inoculation?

I know of several cases of venom-collection professionals who work with species for which there is no antivenom available, and in some of those cases, the species involved can be extremely dangerous. The small handful of people who actually make a living extracting venom have, on average, about one accident every 30,000 to 50,000 extractions. I could understand if these people reasoned that the potential benefit might outweigh the risk. However, it is notable that none of them have chosen to self-immunize: all of the major private venom labs in the US, the operations with a statistical certainty of being bitten, opt for rapid access to antivenom rather than self-inoculation. Even when envenomation does happen, there is no clear evidence that the risk:benefit ratio of SI is superior to that of a rapid, well-rehearsed emergency response.
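To put that accident rate in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes each extraction is an independent trial, uses the per-extraction rate range cited above, and plugs in purely hypothetical career volumes (the lab and occasional-extractor figures are illustrations, not measured numbers):

    # Probability of at least one extraction accident over a career.
    # Rates are the range cited above (~1 in 30,000 to ~1 in 50,000 extractions);
    # the extraction counts are hypothetical illustrations, not real data.
    def p_at_least_one(rate_per_extraction: float, extractions: int) -> float:
        # Assumes each extraction is an independent trial.
        return 1.0 - (1.0 - rate_per_extraction) ** extractions

    lab_career = 1_000_000   # hypothetical full-time lab, decades of extractions
    occasional = 500         # hypothetical occasional extractor
    for rate in (1 / 30_000, 1 / 50_000):
        print(f"rate 1/{round(1 / rate):,}: "
              f"lab ≈ {p_at_least_one(rate, lab_career):.3f}, "
              f"occasional ≈ {p_at_least_one(rate, occasional):.3f}")

Under those assumptions, a high-volume lab approaches certainty of at least one accident over a career, while an occasional extractor stays in the low single-digit percentages, which is the sense in which the labs face a “statistical certainty” of being bitten.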

The situation Joe Slowinski faced on expedition in Myanmar is also cited as a possible application. Joe was surveying a remote area, days from medical care, when he was bitten by a small krait (Bungarus multicinctus). The team’s plan to equip themselves to manage such an accident fell apart on arrival in the country, and they decided to press on with the expedition regardless. Despite their heroic efforts, Joe’s team was not able to save his life, and he died the next day. Would self-immunization have saved him? There’s no way to answer that with any certainty. Some have cited the 1955 report Complete and Spontaneous Recovery from the Bite of a Blue Krait Snake (Bungarus caeruleus), which describes Bill Haast’s survival of envenomation by a blue krait, to suggest that it could have. But even if that were true, Slowinski’s situation was exceptional in every conceivable way, and it would be difficult to argue that self-immunization under his unique circumstances is a basis for more general application.

There are also cases where antivenom exists, but the person is allergic to it. Is self-immunization a solution in these cases? Again, that’s hard to say, but hospitals are equipped to manage anaphylaxis, and they are infinitely more rehearsed at doing that than they are at treating envenomation, especially exotic envenomation, deliberate or otherwise. It’s tough to make the case that self-immunization is the best way to manage these cases.

Each of these scenarios is highly unusual, and even for those cases, at the very least it would be reasonable to involve an immunologist with the training and expertise to direct and monitor the process.

So while there might be some theoretical application under some truly exceptional circumstances, in practice that’s not how SI is being used. More often than not, it’s being done to facilitate unnecessarily risky handling and demonstrate the ability to withstand intentional bites rather than protect against accidental ones.

There is a fatalistic saying among some amateur herp enthusiasts that being bitten is “not a matter of if, but when.” This is simply false. There are well-established tools and techniques for safe, hands-off maintenance of venomous collections that reduce the risk of envenomation to nearly zero. There are plenty of examples of people who have worked with venomous snakes for 30 or 40 years (and more) without ever being bitten. There is no reason to consider accidents inevitable. They’re not. SI as protection in the context of general husbandry is therefore insurance against risk-taking that isn’t necessary to begin with. It is the herpetological equivalent of buying expensive, unnecessary insurance against your own drunk driving.

Dr. Bryan Fry summed it up nicely: “Indeed for most of the people self-immunising, a significant portion of their risk of envenomation comes when milking the snakes to obtain venom for self-immunisation. Circular logic at its finest.”

Ultimately, it’s hard to imagine any problem for which self-immunization is the best available solution or preferable to passive immunization with antivenom. The practice boils down to taking significant risks for benefits that are almost certainly unnecessary.

Are there other benefits?

Short answer: None have been demonstrated.

“The plural of anecdote is anecdotes, not data.”
— Dr. Bryan G. Fry

Beyond resistance to envenomation, SI discussions are riddled with wishful thinking and questionable claims about the supposed health effects of injecting venom. It’s easier to be unequivocal about these claims: There is no evidence whatsoever that the human body can somehow accept whole venom — a biocidal cocktail that evolved to kill things — and by some unknown mechanism, magically transform it for its own benefit. There is no support for the assertion that whole venom provides any health benefits whatsoever, either generally or as a treatment for any specific condition. (Immunotherapy with bee venom is beyond the scope of this article, but it’s a whole different process with different objectives.)

A popular response to this objection is something like, “But you can’t prove it doesn’t work!” Sorry, that’s not how evidence works. It’s actually the opposite of how evidence works. It is nonsensical to assert that venom might have <whatever> effect unless there’s some evidence that it actually does. This is critical-thinking 101: Absence of contradictory evidence is not evidence that all hypotheses are possible. It has not been proved that I cannot dead-lift 10x my own weight, but it’s not reasonable to assume that I might be able to do it just because ants can.

“But it did <whatever> for that guy!”

First of all, it probably didn’t do <whatever> for that guy. It’s more probable that <whatever> was a coincidence, a wrong observation, or an effect of some other cause that was wrongly attributed to venom. These stories don’t even make good anecdotes, let alone compelling evidence.

The fact that Bill Haast lived to be 100 years old (and reportedly was rarely ill) is frequently cited as anecdotal evidence that self-immunization could contribute to long life and all-around good health, but that’s a tenuous conclusion. Lots of people live to be 100, and virtually none of them inject snake venom. The 2010 US Census reported more than 53,000 centenarians, and it’s probable that their longevity is attributable to well-understood factors like heredity, general health, weight, diet, activity and exercise, lifestyle, hygiene, stress, and community. The fact that one of these lucky, long-lived folks happened to inject himself with snake venom is not compelling evidence that the venom deserves the credit. This is confirmation bias. There are even occasional smokers who live to be 100, but nobody is in a hurry to credit tobacco for their longevity.

Still, there are adherents with unshakable belief that training (or “boosting!”) the immune system with venoms might have beneficial effects, despite the absence of any evidence to support this. Various other ideas — the notion that you can use venom to exercise the immune system like a muscle (a bad analogy), preserve youth, and boost your energy — have no scientific support whatsoever.

Has SI produced any new discoveries?

Short answer: No.

Long answer: Still no. The modern idea of using antibodies to deal with toxins and pathogens dates back well over a century, at least to the pioneering work of scientists like Edward Jenner (1749–1823), Albert Calmette (1863–1933), Vital Brazil (1865–1950), and Clodomiro Picado Twight (1887–1944). While antivenoms have been improved and refined over the decades since they were conceived, the basic idea hasn’t changed: challenge an immune system with venom, allow it to produce antibodies, and then use those antibodies to treat someone envenomated by a venom they can neutralize. Whether the antibodies are raised in a horse, a sheep, or a person, the underlying principle is the same. SI today is doing little beyond re-creating immunologic effects that have been understood for over a century. It has not, thus far, contributed anything really new to the body of knowledge on the subject, and it appears unlikely to do so.

But could it? Possibly. Maybe. Who knows? SI raises some interesting questions. However, as it’s being done today, it makes no progress toward answering the questions it raises.