The Case Against Antibodies

Putting the “Antibody” Hypothesis on Trial

The deeper I looked into the methods of virology, the more a fundamental question began to take shape. If “viruses” had never been properly purified and isolated directly from host fluids in order to be independently manipulated and studied, how were “antibodies,” said to be anywhere from two to thirty times smaller, studied scientifically?

After all, “antibodies” are not treated as independent discoveries; they are defined, characterized, and validated within the very framework that presupposes the existence of the “viruses” they are said to target.

That question did more than challenge a technical detail; it forced me to reconsider the broader framework I had been operating within. In the past, I argued that vaccines were unnecessary because so-called “herd immunity” could be achieved through “natural infection” and the protection afforded by “antibodies,” all without rolling the dice on pharmaceutical products with potentially dangerous side effects. But as the “virus” lie became clearer to me, I grew increasingly uncomfortable grounding my position in a model that still depended on the existence of the very entity that had not been scientifically demonstrated.

At that point, I had not yet completely ruled out the possibility that other proposed “pathogens”—bacteria, fungi, parasites, and so on—might play some causal role in disease. The only conclusion I had reached with certainty was that “viruses” were not among them. But as the “pathogen” pillar began to crumble under scrutiny given the lack of scientific evidence supporting the concept, the larger structure built upon it began to unravel as well.

If there were no “pathogens” from which to be “immune,” then what exactly was the “immune system,” and what was it defending against? “Antibodies” were presented as a central component of that defense, described as highly specific proteins produced in response to foreign invaders, binding to them, neutralizing them, and marking them for destruction. These entities were portrayed as the very mechanism through which “immunity” and “protection” were achieved.

The “antibody” narrative had already begun to crack for me after earlier research uncovered the paradox of HIV being diagnosed based on the detection of “antibodies.” As early as April 2020, I was questioning the legitimacy and logic of the broader “antibody” story. In a post on April 14th, I wrote:

Herd-immunity or no Herd-immunity. Antibodies equal protection or antibodies don’t offer protection. A person infected once won’t be reinfected or a person may be reinfected once again. What is the real answer?

No one knows.

This early uncertainty led me to conclude that if I truly wanted to understand what I regarded as the “virus” fraud, I would need to examine those small, Y-shaped entities with far greater scrutiny. To resolve these contradictions, I realized I needed to go back—not to modern immunology textbooks, but to the original experiments themselves.

Source: Nature https://share.google/8litDGHfd51Bkl5Md

To do so, I sought out timelines, such as the one above, tracing the history of “antibody” research so that I could return to the foundational papers upon which the concept was built. From there, I approached the literature just as I had with virology: with a critical eye focused on the logical structure and scientific methodology underlying the claims. I wanted to determine whether the research supporting the “antibody” theory could withstand careful examination.

What I found surprised me, even though, in retrospect, it should not have. The same core flaws I had identified in virology research were present throughout the “antibody” literature as well. As I worked my way chronologically through the historical record, it became clear to me that the “antibody” entity, as presented, was just as fictional as the “virus.” Unsurprisingly, each was being used to validate the existence of the other.

It was logically circular—like claiming Bigfoot exists because Unicorns are drawn to the giant ape, while simultaneously insisting that Unicorns exist because Bigfoot attracts the horned horses. One fictional entity cannot serve as proof of another, and vice versa.

For that reason, I systematically broke down the foundational evidence propping up “antibodies,” presenting it chronologically to expose its internal contradictions. My intention was to lay the case before the public. I placed the “antibody” on trial under the charge that it was a fictional construct born of artificial, lab-created effects—one presented as a biological protector, yet often used to justify interventions that did anything but protect.

What follows are the summaries of my findings. I have laid out the exhibits, the contradictions, and the historical record. For those seeking a deeper examination, links to fully sourced and expanded analyses are provided. The case appeared clear to me, but I am only the prosecutor. The verdict is yours. You are the judge, the jury, and the final authority.

Examine the evidence carefully, question every assumption, and decide whether the story you have been told withstands the weight of reason.

This case traces the historical and methodological foundations of what are called “antibodies,” examining both their conceptual development and the experimental practices that claim to detect them. For clarity, the discussion is organized into a series of exhibits:

  • Exhibits A–M: Historical development, from Behring’s inferred anti‑toxins through early immunological theories and the evolution of “antibody” models.
  • Exhibit N: Questions of specificity, highlighting how presumed selective binding has been challenged in research.
  • Exhibit O: Cross-reactivity, including “Covid‑19” era findings that illustrate how “antibody” signals may bind to multiple unrelated antigens.
  • Exhibit P: The reproducibility crisis, showing inconsistencies in “antibody” research and their implications for scientific reliability.
  • Exhibits Q–R: Correlates of protection, emphasizing that the presence of “antibodies” does not necessarily indicate “immunity” or disease protection.

The Closing Statement ties these threads together, highlighting recurring patterns of inference, circular reasoning, and unverified assumptions that echo similar issues observed in virology. Readers can access more detailed articles by clicking on the Exhibit headings, with the exception of Exhibit B, which contains two articles linked in the subheadings. The exhibits are now before you. Carefully pore over the evidence and render your own verdict.

Exhibit A: Emil von Behring’s Diphtheria/Tetanus Papers (1890) — Precursor to Antibodies

The 1890 experiments of Emil von Behring and Shibasaburo Kitasato are widely credited with giving rise to the concept of “antibodies.” A contemporary of Robert Koch, Behring worked during a period when bacteriology and immunology were still developing fields, and he hypothesized that unseen substances in the blood—later termed “antibodies”—protected animals from bacterial toxins. This conclusion, however, was not based on the isolation of a molecule, but on indirect evidence obtained by subjecting animals to chemical pretreatments designed to mitigate toxicity.

Behring utilized iodine trichloride and zinc chloride to treat both bacterial cultures and animals, gradually increasing toxin exposure. As the animals survived repeated injections, they appeared to develop a tolerance. While this was interpreted as the production of “immune substances,” Kitasato himself suggested a more biological alternative: the animals might simply be exhibiting adaptation or habituation to poison, much like the progressive tolerance observed in chronic alcohol consumption. Furthermore, Behring noted that species like mice and rats possessed a natural resistance to diphtheria regardless of dose, suggesting that host factors, rather than newly formed blood substances, could determine the outcome of exposure.

The experiments did demonstrate a reproducible phenomenon: serum from pretreated animals could sometimes confer “protection” when injected into others. Behring interpreted this functional transfer as proof of a specific neutralizing factor. However, the serum was never purified or chemically characterized, and no discrete molecular entity was observed. The “anti-toxin” was an inference drawn from lab-created effects rather than an identified chemical reality. Additionally, the reliance on artificial exposure routes, such as subcutaneous and intraperitoneal injection, further distanced the laboratory model from natural biological processes.

Behring later acknowledged limitations in his own work, noting that the observed “immunity” was temporary and not necessarily attributable to specific, unseen components of the blood. He was explicit that conclusions drawn from his animal experiments should not be directly applied to humans. This caution was warranted, as early serum therapies developed from similar methods proved toxic in some cases, leading to severe reactions, including anaphylaxis and death.

Examining the foundational evidence for the “antibody” hypothesis reveals that these experiments assumed the existence of a substance to fill a theoretical gap. Behring’s interpretation of transferable protection as a “specific neutralizing factor” fit the era’s emerging bacteriological framework, which demanded specific solutions for specific toxins. Within this logic, the concept of the anti-toxin was formalized and reified long before the underlying molecular reality was ever demonstrated.

Exhibit B: Paul Ehrlich’s Side-Chain Antibody Theory (1900)

Another contemporary of Robert Koch, German physician and scientist Paul Ehrlich—later known as the “father of chemotherapy”—expanded on the concepts established by von Behring. In his 27-page presentation On Immunity with Special Reference to Cell Life (1900), Ehrlich proposed his Side-Chain theory and later developed ideas that contributed to what became known as the complement system. These foundational elements of immunology were born not from direct discovery, but from speculative theoretical constructs that were controversial even among his peers.

Part 1: The Side-Chain Theory

While working with Behring’s anti-diphtheria serum, Ehrlich developed a method to standardize the serum in units relative to a fixed and invariable standard. Previously, the anti-toxin in sera varied due to numerous factors, making measurement unreliable. This standardization became the basis for all future serum standardizations and led Ehrlich to develop his Side-Chain theory of “immunity.”

In this theory, Ehrlich proposed that cells possessed chemical “side chains” that bound toxins through a lock-and-key mechanism. Toxin binding, he argued, stimulated cells to overproduce these side chains, which were then released into circulation as “antibodies.” The problem was that this proposed model was hypothetical and metaphor-driven rather than derived from direct visualization or purification of any such structures. At the time, “antibodies” had not been isolated, structurally characterized, or independently demonstrated as discrete molecular entities. What Ehrlich provided to support his theory were illustrative diagrams that became known as his “pretty pictures.” Many contemporaries considered these images misleading rather than faithful representations of biological reality.

As such, Ehrlich’s theory was not universally accepted. Several contemporaries criticized it for being overly speculative and mechanistic, pointing to his lack of self-criticism and vivid imagination:

  • It relied heavily on chemical analogy, particularly the lock-and-key metaphor.
  • Some questioned whether “receptors” or “side chains” were real physical structures or merely conceptual devices.
  • Others favored physiologic or cellular explanations over Ehrlich’s chemical determinism.

Despite these objections, Ehrlich’s theory became institutionalized before direct empirical confirmation of the proposed structures.

Part 2: The Complement System Debate

The conceptual expansion continued with the discovery of “alexin,” a substance in serum, described by Nobel Prize–winning immunologist Jules Bordet, that could rupture (lyse) bacteria. Ehrlich absorbed Bordet’s findings into his own framework, renaming the substance “complement” and framing it as a secondary component that worked alongside specific “amboceptors” (another Ehrlich-coined term).

This renaming was more than a semantic choice; it was a theoretical takeover. By redefining alexin as “complement,” Ehrlich effectively folded Bordet’s independent findings into his own side-chain system. Like the “antibody,” the “complement” was conceptualized and named long before it was purified or biochemically defined. The framework relied on the functional observation of bacteria bursting in a test tube, which was then used to infer a complex molecular machinery that remained invisible.

The early framework depended primarily on functional assays. Ehrlich’s foundational work developed early immunology around explanatory models that assumed the existence of unseen molecular mechanisms before those mechanisms were independently demonstrated. Thus, historical critics questioned:

  • The reification of theoretical constructs into assumed physical entities
  • The reliance on metaphor to explain complex biological processes
  • The absorption of competing findings into a dominant explanatory system

Ehrlich’s Side-Chain theory was a creative early model that influenced later immunological thinking by introducing concepts such as receptors, “antibodies,” and the idea that blood contains factors that can bind and neutralize foreign substances. Ehrlich abandoned the law of parsimony and used his “lively imagination” to dream up what he considered to be the most plausible and easy explanation for “immunity.” However, as Felix Le Dantec critiqued, “Ehrlich has added nothing to the explanation of the imaginary invalid.” His controversial contribution was establishing a “domain of invisible specimen behavior” that guided immunology, even though many details were later revised or replaced.

Exhibit C: The Antibody Equation (1929)

In 1929, Michael Heidelberger and Forrest Kendall introduced what became known as “the antibody equation.” This work sought to bring the precision of chemistry to the unobserved world of immunology. By treating antigens and “antibodies” as stoichiometric reactants that combined in measurable proportions, Heidelberger aimed to move the field beyond speculation and into the realm of quantitative science.

A key step in this development came in 1923, when Heidelberger and Oswald Avery published work on pneumococcal “antigens,” concluding that the specific antigenic substance of pneumococcus was a carbohydrate (polysaccharide). This finding later became foundational for Heidelberger’s argument that “antibodies” must therefore be proteins.

However, Heidelberger’s own 1923 report reveals a profound logical contradiction. He admitted that attempts to stimulate “antibody” production by injecting animals with this purified polysaccharide “yielded negative results.”

“Attempts to stimulate antibody production by the immunization of animals with the purified substance yielded negative results.”

By the very definition of immunology, a substance that fails to induce an “immune response” is not an antigen. Yet, Heidelberger continued to treat this purified carbohydrate as the specific “antigenic determinant” in his subsequent experiments.

The “antibody equation” was thus constructed through a process of circular inference:

  1. The Precipitin Test: Immune serum reacted with the purified polysaccharide in a test tube, forming a visible precipitate.
  2. The Assumption of Identity: Because a reaction occurred, the reacting component in the serum was assumed to be a specific “antibody.”
  3. The Calculation of Mass: Heidelberger measured the nitrogen content of this precipitate and, assuming “antibodies” were proteins, multiplied the nitrogen weight by 6.25 to estimate the total “antibody” mass.
  4. The Formalized Equation: These measurements were then used to build mathematical models describing the proportions of the “antigen-antibody” complex.

But this framework rested on several assumptions:

  • That the pneumococcal polysaccharide had definitively been established as the antigen, despite its failure to induce “antibody” production in animals.
  • That “antibodies” existed and were modified proteins and nitrogenous in nature.
  • That the precipitin reaction successfully separated specific reacting substances from closely associated non-specific serum material.
  • That earlier conclusions about the pneumococcus antigen were accurate and required no further independent confirmation.

Heidelberger assumed that “antibody” content could be calculated as nitrogen precipitable by specific polysaccharide, multiplied by 6.25 to estimate protein mass. He acknowledged that the precipitin test did not fully separate reacting substances from non-specific material with which they were closely associated, though he believed this limitation could be mitigated.
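The arithmetic behind this step can be sketched in a few lines. The numbers below are hypothetical, and the function name is my own; only the 6.25 factor comes from the historical method, reflecting the assumption that proteins are roughly 16% nitrogen by mass (1 / 0.16 = 6.25). Note how the calculation simply relabels all precipitable nitrogen as “antibody” protein:

```python
# Sketch of the nitrogen-to-protein conversion used in the precipitin method.
# The factor 6.25 assumes proteins are ~16% nitrogen by mass: 1 / 0.16 = 6.25.

NITROGEN_TO_PROTEIN = 6.25  # grams of protein assumed per gram of nitrogen

def estimated_antibody_mass(precipitate_n_mg: float, antigen_n_mg: float = 0.0) -> float:
    """Estimate "antibody" protein mass (mg) from precipitate nitrogen (mg).

    precipitate_n_mg: total nitrogen measured in the washed precipitate
    antigen_n_mg: nitrogen attributed to the antigen (zero for a
                  polysaccharide antigen, which contains no nitrogen)
    """
    antibody_n = precipitate_n_mg - antigen_n_mg
    return antibody_n * NITROGEN_TO_PROTEIN

# Hypothetical example: 0.80 mg of nitrogen in the precipitate,
# none attributed to the nitrogen-free polysaccharide antigen.
print(estimated_antibody_mass(0.80))  # → 5.0 (mg of presumed "antibody" protein)
```

The calculation is exact arithmetic, but everything it outputs depends on the prior assumptions listed above: that the precipitated nitrogen belongs to a specific protein entity, and that no non-specific serum material co-precipitated.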

Thus, the “antibody equation” did not provide direct evidence of a discrete molecular entity. Instead, it quantified the debris of a laboratory reaction and labeled it “antibody.” While the math was precise, the ontological status of the entity remained unverified. Heidelberger’s work represents a sophisticated methodological pattern: the mathematical formalization of an assumption. By the end of the 1920s, immunology had achieved the appearance of a hard science by measuring the “domain of invisible specimens,” ensuring that the theoretical construct of the “antibody” remained institutionalized through the weight of numbers rather than the clarity of sight.

Exhibit D: Direct Template and the Theoretical Structure of Antibodies (1930–1940)

By the early twentieth century, the existence and mechanism of “antibodies” remained uncertain. Although serum reactions such as agglutination, precipitation, and neutralization had been repeatedly observed in laboratory experiments, the underlying entities presumed to produce these effects had not been isolated, directly visualized, or structurally characterized. As a result, immunologists attempted to explain these lab-created phenomena through a series of competing theoretical models. Over the following decades, multiple frameworks were proposed—each intended to account for the same experimental observations while addressing perceived shortcomings in earlier explanations. Rather than representing the progressive discovery of a clearly defined molecular object, early “antibody” research during this period was characterized by successive theoretical reconstructions designed to interpret the behavior of serum in experimental systems.

One of the first major alternatives to Paul Ehrlich’s Side-Chain theory emerged in 1930 when Fritz Breinl and Felix Haurowitz proposed what became known as the “Direct Template” hypothesis. They suggested that the antigen itself entered blood cells and acted as a physical template, shaping the formation of “antibodies” required for “immunity.” In this framework, “antibodies” were not pre-existing specific entities but proteins molded into complementary forms by direct interaction with the antigen that produced them. These models remained speculative, resting on chemical inference rather than direct structural demonstration.

In 1940, Linus Pauling expanded this framework by applying his expertise in chemical bonding. Building on the work of Breinl and Haurowitz, he proposed that “antibodies” were polypeptide chains that folded around antigens to adopt complementary shapes. In doing so, he attempted to provide a structural and chemical basis for “antibody specificity,” integrating aspects of Ehrlich’s earlier “lock-and-key” imagery into a more explicitly molecular model.

While Pauling’s model provided a more sophisticated molecular vocabulary, it remained essentially theoretical. His results regarding the artificial production of “antibodies” proved difficult to reproduce and drew criticism from contemporaries and funding institutions. His model was eventually undermined by emerging evidence that specificity arises from pre-existing genetic variation rather than antigen-directed folding.

Although his template model was eventually abandoned, Pauling is often credited with advancing the first detailed structural conception of these unseen entities. Historically, this era illustrates a pattern: functional phenomena were observed in the lab, and explanatory frameworks were constructed to account for them. The “antibody” concept was shaped by shifting interpretations of serum behavior long before a stable molecular account emerged. What later became regarded as a settled physical reality began as an evolving interpretive structure built upon observed laboratory effects.

Exhibit E: Merrill Chase Inadvertently Disproved Antibodies in 1942

The long-standing belief that circulating “antibodies” protect the body from disease guided early immunology, but a 1942 experiment by Rockefeller immunologist Merrill Chase, working with Karl Landsteiner, challenged that notion. Chase aimed to understand how “immunity” to tuberculosis could be transferred by isolating the supposed “antibodies” from sensitized guinea pigs and injecting them into non-sensitized animals. When those recipient animals were later exposed to the antigen (tuberculin), no hypersensitivity reaction occurred, indicating that the serum thought to contain “antibodies” did not transfer “immunity.” Surprisingly, when Chase used less thoroughly clarified samples that still contained white blood cells, the recipients did exhibit a hypersensitivity response. This led Chase to conclude that it was the cells—not hypothetical “antibodies”—that were responsible for transferring what was interpreted as “immunity” in these experiments, challenging the assumption that “antibodies” alone mediated “immune” responses.

Chase’s original 1942 paper with Landsteiner described how injecting peritoneal exudates from sensitized animals produced a skin reaction in normal animals, but that this effect was due to the sediment containing cells, not the clarified fluid supposedly containing “antibodies.” Astrid Fagraeus, who later proposed that plasma cells produce “antibodies,” examined Chase’s samples and found no plasma cells present, reinforcing that the induced hypersensitivity occurred without detectable “antibodies.” Even extensive extraction of spleen and lymph node cells failed to reveal any “antibodies” using the accepted assays of the time, yet the cellular preparations could transfer the hypersensitivity reaction. Rather than treating these results as a falsification of the “antibody-based” explanation for “immunity,” Chase and the field reframed the findings by proposing dual components of the “immune system”—an immediate, cell-mediated response and a delayed, “antibody-mediated” response—thereby preserving the theoretical role of “antibodies” despite contradictory experimental outcomes.

This pivotal moment marked the emergence of what became known as “cell-mediated immunity,” in which white blood cells, rather than circulating “antibodies,” were seen as central to certain “immune” phenomena. Chase’s work was later recognized as foundational for identifying these two arms of “immune” response, even though the original discovery contradicted the belief that “antibodies” alone conferred “immunity.” Rather than confronting the broader implication that “antibodies” had not been demonstrated to produce “immunity” in these experiments, the field adapted its theoretical framework, preserving belief in invisible “antibody” entities and explaining away negative results by dividing “immunity” into cellular and humoral components. This reinterpretation amounted to a rescue device that allowed immunology to maintain the unproven role of “antibodies” despite evidence that they did not function as originally claimed.

Exhibit F: Antibodies, Plasma, and the Power of Correlation

By the late 1940s, the focus of immunology shifted from theoretical modeling to identifying the cellular site of “antibody” production. A pivotal moment occurred in 1947–48 when Astrid Fagraeus published her doctoral dissertation, Antibody Production in Relation to the Development of Plasma Cells, which earned her credit for showing that plasma cells produce “antibodies.” Before her work, the function of plasma cells was unknown. According to Advances in Immunology by Frederick W. Alt, Fagraeus performed in vitro studies with plasma cell–enriched spleen biopsies from horse serum–injected rabbits and found a direct correlation between plasma cell numbers in the tissue and the amount of secreted “antibodies” in the culture medium.

However, this conclusion relied on a series of experimental proxies. Because the “antibody” had not been isolated or purified, Fagraeus could not observe its production directly; instead, she used the quantity of plasma cells as a surrogate for the presence of a molecular entity. Her findings rested on the assumption that staining patterns in fixed, dead tissue sections accurately reflected the dynamic biological processes of a living organism. Consequently, the link between the cell and the product remained an inference based on correlation rather than a direct observation of secretion.

In 1955, Leduc and colleagues attempted to bridge this gap using immunofluorescence to “detect” antibodies within the lymph nodes of rabbits. While their work is often cited as definitive proof, the authors noted significant methodological hurdles, admitting that their histological demonstrations were at times clouded by “the confusion caused at times by the occurrence of non-specific reactions.” This suggests that the method lacked the absolute specificity required to distinguish the theoretical “antibody” from other proteins or background cellular noise.

Albert Coons, a pioneer of immunofluorescence, further detailed these uncertainties in 1956. He emphasized that fluorescence signals could be triggered by stains reacting with unrelated blood proteins or “antibodies” from prior immunizations. Despite these confounding variables, inconsistent results were frequently attributed to “unknown variables” rather than inherent limitations of the indirect method itself. Throughout this period, techniques like immunofluorescence relied on the foundational assumption that the intensity of a fluorescent signal was a reliable proxy for “antibody” content, yet the “antibody” itself remained unisolated and uncalibrated against a known target.

Ultimately, the work of Fagraeus, Leduc, and Coons illustrates a recurring pattern in the field: the substitution of visual proxies for molecular proof. While plasma cells are clearly visible in tissue sections, their status as “antibody factories” was deduced from their proximity to fluorescent dyes and chemical reactions. Without the direct visualization of a purified, isolated molecule, the causal relationship between the cell and the “antibody” remained a sophisticated interpretation of laboratory-induced effects. This history highlights a critical distinction between observing a correlation and providing a direct molecular demonstration of a biological entity.

Exhibit G: The Indirect Template Theory of Antibody Production (1949)

From 1900 through 1949, multiple competing theories were proposed to explain how “antibodies” are formed, reflecting the fact that “antibodies” themselves had never been purified or directly observed in biological fluids. Early explanations such as Paul Ehrlich’s Side-Chain theory suggested that “antibodies” pre-existed in cells and were selected by antigens. In the 1930s and 1940s, the Direct Template theory advanced by Breinl, Haurowitz, and later Linus Pauling proposed that antigens physically acted as templates inside cells, guiding the folding of “antibody” proteins into complementary shapes. These models were attempts to explain specificity despite the absence of direct observation of “antibody” formation.

In 1949, Frank MacFarlane Burnet proposed the third competing theory with what became known as the Indirect Template theory. Rejecting the idea that antigens directly shaped “antibodies,” Burnet suggested instead that antigen exposure induced a heritable change in certain cells, permanently altering their synthetic machinery. He hypothesized the existence of “recognition units” and “genocopies” of antigenic determinants that became incorporated into the genome of stem cells, allowing “antibody” production to continue even after the antigen was gone. This theory shifted the explanation from structural molding to genetic reprogramming, but it still relied on hypothetical intracellular mechanisms that could not be directly demonstrated at the time.

Supporters argued that “antibody” production persisted after antigen clearance, interpreting this as evidence of inherited cellular change. However, much of the evidence was acknowledged to be suggestive rather than conclusive, and the mechanisms Burnet described remained theoretical. As research progressed and the consensus coalesced around the idea that “antibodies” of different specificities possess distinct amino acid sequences, aspects of both the Direct and Indirect Template theories came into conflict with emerging biochemical findings. Within a decade, Burnet himself abandoned the Indirect Template model in favor of what would become the Clonal Selection theory.

When one model failed, another speculative architecture took its place. It was a cycle of replacement rather than discovery. Each new framework claimed to solve the problem of “antibody” formation, yet each relied on the same foundation of inference. These shifting frameworks were not discoveries of a molecule, but expansions of a narrative—speculative structures designed to accommodate laboratory observations while the entity itself remained a mystery.

Exhibit H: The Natural Selection Theory of Antibody Formation (1955)

In 1955, Danish immunologist Niels K. Jerne introduced the “Natural Selection” theory of “antibody” formation, representing a fourth attempt to explain the origin of these still-unobserved entities. Challenging both the Direct and Indirect Template models, Jerne proposed that the body does not create “antibodies” in response to antigens. Instead, he hypothesized that a vast repertoire of “antibody-like” globulins already exists spontaneously in the blood. In this framework, an incoming antigen acts merely as a carrier, selecting and transporting a pre-existing globulin to a cell capable of reproducing it.

Jerne’s model replaced the concept of “antigenic molding” with a process akin to Darwinian selection. He argued that the antigen’s presence simply shifts the composition of circulating globulins by preferentially stimulating the replication of complementary configurations. This theory was constructed to account for laboratory phenomena, such as the “booster effect” and continued production after antigen clearance, without requiring the antigen to physically direct protein synthesis.

Despite its later influence, Jerne’s theory was recognized in its time as highly speculative. Critics, including Melvin Cohn, pointed out a fundamental biological flaw: the theory required protein molecules to direct the synthesis of identical proteins. This concept of self-replicating proteins was incompatible with the emerging understanding that protein synthesis requires a nucleic acid intermediary (DNA/RNA). Jerne’s model proved that in the search for the “antibody,” a sophisticated story was more valuable than a physical reality. It established a precedent where the theoretical narrative could simply override known biological laws to maintain the existence of an unobserved entity.

Exhibit I: The Clonal Selection Antibody Theory (1957)

In 1957, Frank Macfarlane Burnet proposed his second and most enduring framework: the Clonal Selection theory. This model emerged as a direct response to the inadequacies of the Direct Template, Indirect Template, and Natural Selection models, all of which had struggled to explain specificity without structural data. Clonal Selection reframed the problem entirely, asserting that “antibody” specificity is not created by the antigen, but rather selected from a pre-existing repertoire of cells. In this view, a vast array of distinct cells already exists before exposure; when an antigen enters the system, it simply stimulates the specific cells that already possess matching receptors, causing them to proliferate and secrete their respective products.

Burnet integrated earlier concepts of cellular diversity into a framework that accounted for observable phenomena like “immune memory” and the rapid rise in serum activity upon re-exposure. Within this model, antigen binding does not generate new specificity—it merely amplifies what is already present. By the early 1960s, Clonal Selection became the cornerstone of modern immunology, primarily because it aligned with emerging knowledge regarding cell proliferation and offered a more coherent narrative than the “self-replicating protein” paradox of Jerne’s previous model.

However, despite its conceptual dominance, critics noted that Clonal Selection remained a theoretical construct grounded in inference. At the time of its formulation, neither the diverse variety of receptors nor the specific intracellular process of “antibody” production had been directly observed or isolated. The model presupposed that cells arrive pre-equipped with specific receptors, yet the mechanisms generating this diversity remained speculative for decades.

Fundamentally, the rise of Clonal Selection marked a turning point in the field’s priorities. Despite the continued absence of direct physical evidence for the molecule it was meant to explain, the theory was widely accepted because it provided a coherent framework capable of organizing laboratory observations. In doing so, Clonal Selection helped establish a lasting precedent: that the acceptance of a model did not require the direct demonstration of the entity it described.

Exhibit J: Was the First Atomic Resolution Structure of an Antibody Fragment Published in 1973?

In the history of “antibody” research, 1973 is often cited as the date when the first atomic‑resolution structure of an “antibody” fragment—specifically the Fv fragment—was published, a milestone sometimes credited with advancing monoclonal “antibody” technology. However, available sources suggest the claim of “atomic resolution” is poorly supported and potentially misleading. A paper from 1972 is commonly cited as the basis for this first structure, yet that characterization is not corroborated by other historical accounts, including Wikipedia’s overview of “antibody” history, which mentions David Givol’s preparation and characterization of the Fv fragment but does not describe it as atomic resolution.

In structural biology, “atomic resolution” implies that individual atoms are clearly resolved in an electron-density map—a high technical bar that is difficult to achieve, as the techniques involved can often alter or degrade the sample. Rather than providing a direct visualization of an intact molecule, Givol’s work involved deconstructing a larger “Fab” fragment from mouse myeloma proteins into a smaller variable unit. Researchers then inferred that this fragment represented the minimal binding unit of a theoretical whole.

The resulting images were not direct photographs of a molecule in its natural state, but rather mathematical interpretations of diffraction patterns fitted into a hypothetical model. By focusing on fragments and building models from the “bottom up,” the research arguably overstated the structural certainty of the intact “antibody” molecule.

This case illustrates how structural biology terminology can be applied loosely to make research appear more definitive than it is, particularly when the subject has never been directly purified, isolated, or visualized as a whole. Like the theoretical frameworks that preceded it, the 1973 structural model relied on the interpretation of fragmented data and theoretical modeling. It continued the established methodological pattern: providing a sophisticated representation of a lab-created object while the intact, isolated entity remained beyond direct empirical observation.

Exhibit K: The Chemical Structure of Antibodies?

The question of the chemical structure of “antibodies” did not reach a widely accepted model until the mid-20th century. The work most often credited with resolving this problem came from the studies of Rodney Porter and Gerald Edelman, who were awarded the Nobel Prize in Physiology or Medicine in 1972 for their investigations into the structure of immunoglobulins. Their conclusions, however, did not arise from direct visualization of intact “antibody” molecules but from indirect biochemical experiments performed on proteins assumed to contain “antibody” activity.

Porter approached the problem by enzymatically digesting rabbit gamma-globulin with papain, which cleaved the protein into three major fragments. Two of these fragments retained antigen-binding activity, while the third did not. From this functional division, Porter inferred that the molecule contained multiple binding regions connected to a separate structural portion. Edelman, working through chemical reduction and separation techniques, proposed that immunoglobulins consisted of multiple polypeptide chains linked together by disulfide bonds. By combining fragment analysis with biochemical inference, both researchers constructed models for the overall organization of the molecule.

From these experiments emerged the familiar Y-shaped structural model of the “antibody,” composed of two heavy chains and two light chains joined by disulfide linkages, with antigen-binding regions located at the ends of the molecule’s “arms.” However, this interpretation was an assembly based on the behavior of pieces generated through the deliberate destruction of the protein. The model was deduced from how these fragments interacted after the molecule had been enzymatically dismantled, rather than from a direct structural determination of the intact whole.

This episode continued to reflect the broader pattern repeatedly seen in the development of “antibody” theory: functional phenomena observed in serum were interpreted through successive biochemical models attempting to explain how specificity might arise at the molecular level. The structural model was essentially a “bottom-up” fabrication—researchers worked backward from laboratory-generated effects to construct a molecular explanation that fit the observed behavior of broken parts. In essence, it was an attempt to manifest a whole by stitching together the behaviors of its mangled parts, effectively mistaking the results of biochemical destruction for the fundamental architecture of the living state.

Exhibit L: Köhler and Milstein’s Monoclonal Antibody Monstrosity (1975)

In 1975, the “antibody” narrative shifted from “observation” to manufacturing with the development of hybridoma technology by Georges Köhler and César Milstein. Their goal was to produce “monoclonal antibodies”—large quantities of identical proteins presumed to possess a single, defined specificity. To achieve this, spleen cells from mice were fused with immortal myeloma (cancer) cells, creating hybrid cells capable of indefinite growth and continuous secretion of immunoglobulin proteins.

The procedure was a feat of highly artificial laboratory manipulation. Spleen cells from mice exposed to sheep red blood cells were fused with myeloma cells using “viral agents” and cultured in a selective “HAT” medium. Researchers then screened the resulting clones using functional tests, such as plaque assays, to identify cell lines that appeared to produce proteins with the desired binding activity. Only a small fraction of these clones exhibited the intended specificity, which was once again inferred through binding assays rather than confirmed through the direct visualization of a discrete molecular entity.

From these experiments arose the idea that laboratories could manufacture uniform populations of “monoclonal antibodies.” In practice, however, the method did not involve isolating naturally occurring “antibody” molecules directly from blood or serum. Instead, it produced immortalized hybrid cell lines engineered to secrete immunoglobulin proteins under highly artificial laboratory conditions. The properties attributed to these proteins were interpreted from functional assays measuring precipitation, binding, or neutralization rather than from direct structural demonstration of intact “antibody” molecules.

With Köhler and Milstein, the “antibody” moved from a mystery to be solved to a product to be synthesized. The field effectively abandoned the search for a naturally occurring entity, choosing instead to define the “antibody” by its industrial output. In this era, the lab-created effect did not just explain the entity, it replaced it. Hybridoma technology therefore did not resolve the longstanding uncertainties surrounding the identity and structure of “antibodies.” Rather, it created a powerful system for producing proteins interpreted as “antibodies” on the basis of indirect functional behavior. In this sense, the rise of monoclonal “antibody” technology continues to illustrate how the “antibody” concept remained operationalized through experimental systems that inferred the existence and specificity of these entities from lab-created effects rather than from direct observation of a clearly defined molecular object.

Exhibit M: The Antibody™

The concept of the “antibody” eventually came to be represented by the familiar Y-shaped immunoglobulin model widely reproduced in scientific literature and textbooks. However, this representation did not originate from the direct visualization of intact “antibody” molecules. Rather, it emerged gradually from a series of theoretical interpretations and structural reconstructions developed to explain laboratory reactions such as agglutination, precipitation, and neutralization observed in serum.

Early diagrams depicting “antibodies” were largely conceptual. Researchers such as Paul Ehrlich illustrated hypothetical structures in order to explain how these unseen substances might interact with antigens, but these drawings were not intended as literal depictions of directly observed molecules. Instead, they functioned as explanatory models designed to account for the specificity of “immune” reactions inferred from experimental assays.

As biochemical and structural methods advanced during the twentieth century, researchers attempted to provide a more detailed account of “antibody” structure using techniques such as X-ray diffraction, electron microscopy, and later computational reconstruction. Yet, these methods did not involve observing “antibodies” in their natural state within blood or serum. Instead, they relied on highly processed samples—crystallized proteins or fragmented molecules—from which a structure could be mathematically inferred. The Y-shaped form was a “bottom-up” assembly, reconstructed from the behavior of fragments and the patterns of X-ray diffraction rather than the direct observation of an integrated whole.

Modern imaging continues this inferential tradition. Techniques such as cryogenic electron microscopy (Cryo-EM) and computational reconstruction involve intensive preparation, including chemical purification, staining, freezing, and the statistical averaging of thousands of individual data points. The resulting images are not straightforward captures of biological structures; they are high-tech reconstructions derived from processed data.

The modern image of the “antibody” is a theoretical model that solidified through decades of experimental interpretation. Rather than the discovery of a visible molecular object, the Y-shaped model represents a conceptual reconstruction—a visual shorthand for a century of biochemical inference and fragment analysis. It is a model built to account for laboratory effects, refined over time until the model itself was mistaken for the original, unobserved entity.

Exhibit N: Antibody Specificity?

A central tenet of modern immunology is that “antibodies” possess a high degree of specificity, selectively binding to particular antigens. This presumed selectivity is the foundational assumption for nearly all serological diagnostics, including ELISA tests and immunoassays used to infer exposure to “pathogens.” However, the reliability of this assumption is frequently challenged by the persistent reality of cross-reactivity and experimental inconsistency.

Concerns regarding “antibody” specificity have been raised by researchers working directly with these reagents. For example, neuroscientist Clifford B. Saper noted that many “antibodies” used in research fail to demonstrate adequate specificity or reproducibility. In fact, he stated that there is “no such thing as a monoclonal antibody that, because it is monoclonal, recognizes only one protein or only one virus. It will bind to any protein having the same (or a very similar) sequence.” This lack of absolute specificity has led to the withdrawal of published papers after “antibodies” were found to stain tissues that completely lacked the intended target molecule.

In response to these concerns, Saper proposed a set of criteria intended to improve “antibody” validation. These included identifying the precise antigen used to generate the “antibody,” demonstrating that the “antibody” produces the expected molecular weight signal on a Western blot, and confirming specificity through rigorous control experiments such as testing tissues lacking the target protein. Even with these precautions, however, the interpretation of “antibody-based” staining remains uncertain, as cross-reactivity and off-target binding can still occur under many experimental conditions.

The problem of cross-reactivity, where “antibodies” bind to molecules other than the intended target, further complicates the interpretation of “antibody-based” assays. “Antibodies” raised against one protein sequence may bind to similar sequences found in unrelated proteins, producing signals that appear to confirm the presence of a specific target even when it is absent. Such effects can lead to false-positive results or inconsistent findings across experiments, contributing to broader concerns about reproducibility in fields that rely heavily on “antibody” reagents.

In effect, every conclusion regarding “antibody specificity” is inferred from indirect chemical reactions—fluorescence, color changes in an ELISA well, or the clumping of particles in an agglutination assay. The specificity attributed to these entities is not a directly observed physical trait; it is a statistical interpretation of experimental signals. In this context, the “antibody” remains an operational construct, its existence and behavior deduced from laboratory effects rather than from the unambiguous identification of the molecule itself.

Exhibit O: Corona Cross-Reactivity

The phenomenon of cross-reactivity has posed persistent challenges for serological studies during the “Covid-19” era. Numerous investigations have reported that “antibodies” described as specific to “SARS-COV-2” frequently bind to antigens from other “pathogens,” raising questions about the specificity of the assays used to detect them. In several studies, researchers tested IgG, IgM, and IgA responses against spike proteins from multiple “coronaviruses,” including “SARS-COV-1,” “MERS,” and seasonal strains such as “OC43” and “HKU1.” Rather than demonstrating strict specificity, the results showed that “SARS-COV-2 antibodies” often reacted with related “betacoronaviruses,” leading researchers to suggest that the detected signals may reflect broader cross-reactive binding rather than unique recognition of a single “pathogen.”

This lack of specificity extends beyond the “coronavirus” family. For example, studies comparing dengue and “COVID-19” serology found that a substantial proportion of patients diagnosed with “COVID-19” produced positive results on dengue “antibody” tests. Conversely, sera collected from dengue patients before the “pandemic” sometimes reacted with “SARS-COV-2” serological assays. These findings indicated that similarities between antigenic regions of different “pathogens” may lead to “false-positive” diagnostic results, complicating both clinical interpretation and epidemiological estimates of “infection.”

Perhaps most revealingly, evidence of cross-reactive “SARS-COV-2 antibodies” was identified in blood samples collected as far back as 2011. Analyses of archived sera showed that a subset of individuals—including up to 44% of children—possessed “antibodies” capable of binding to “SARS-COV-2” antigens despite having no possible exposure to the “virus.” While these findings were attributed to prior “common cold infections,” they highlight a fundamental instability: the “antibody” is frequently found where the “pathogen” is not.

Further studies have suggested that cross-reactivity may extend beyond “viruses” altogether. Experiments involving monoclonal and polyclonal “SARS-COV-2 antibodies” reported binding interactions with a wide range of unrelated antigens, including bacterial proteins, vaccine components, and numerous food peptides. Sequence analyses revealed varying degrees of similarity between these proteins and segments of the purported “SARS-COV-2” genome, sometimes ranging from partial homology to substantial sequence identity. Despite demonstrating extensive overlap in potential binding targets, researchers often interpreted these findings as evidence of beneficial “cross-protection,” arguing that prior exposure to common microbes, vaccines, or dietary proteins could explain why many individuals experienced mild disease.

These observations highlight a central methodological tension in “antibody-based” studies. Cross-reactivity is simultaneously acknowledged as a source of diagnostic error and invoked as a mechanism of protective “immunity.” The same experimental noise that complicates a test is reinterpreted as a beneficial feature when it helps explain inconsistent clinical outcomes. This dual framing illustrates the flexibility of the “antibody” concept; the framework is often adjusted to accommodate conflicting data, raising significant questions regarding the falsifiability of its foundational claims.

Exhibit P: The Reproducibility Crisis in Antibody Research

The reliability of “antibodies” as experimental tools has become a focal point of the “reproducibility crisis” in biomedical research. Over the past two decades, the life sciences have been hampered by widespread difficulties in replicating published findings that rely on “antibody-based” assays. These concerns gained global attention following the work of John P.A. Ioannidis, who argued that methodological weaknesses and the lack of replication have rendered many published scientific findings incorrect.

“Antibodies” are among the most commonly used reagents in the life sciences, employed in techniques designed to identify or measure other molecules within biological samples. However, the assumption that these reagents perform with consistent specificity is often betrayed by their actual behavior. Batch-to-batch variability can occur even when “antibodies” are purchased from the same vendor under identical catalogue numbers, sometimes producing dramatically different staining patterns or experimental outcomes. In some cases, researchers have been forced to abandon projects after discovering that “antibodies” thought to detect a particular protein were actually binding to unrelated targets.

A frequently cited example involves a proteomics study that spent two years and approximately $500,000 investigating a proposed pancreatic-cancer biomarker, only to discover that the “antibody” used in the diagnostic kit did not recognize the intended protein at all but instead bound to a different molecule. Such experiences have led many scientists to question the reliability of “antibody” reagents and to advocate for more rigorous validation procedures before experimental results are interpreted as evidence of specific molecular interactions.

Researchers have also noted that “antibodies” frequently display cross-reactivity, binding to multiple proteins rather than exclusively to the intended target. These effects can generate misleading signals in assays such as Western blotting or immunohistochemistry, producing false positives or conflicting experimental results. The problem is compounded by inconsistent validation practices across manufacturers and laboratories, as well as by incomplete reporting of “antibody” characteristics in published studies.

Several structural factors have been proposed as contributing to the reproducibility problem. Polyclonal “antibodies,” which consist of heterogeneous mixtures of molecules produced in animals, can vary significantly between production batches. Even monoclonal “antibodies” produced through hybridoma technology are subject to genetic instability in the underlying cell lines, which can lead to changes in “antibody” composition over time. Combined with a lack of standardized validation across the commercial industry, these factors have led to a market saturated with poorly characterized reagents.

The reproducibility crisis underscores a critical methodological reality: many experimental conclusions rest upon assumptions about reagent performance rather than on the direct confirmation of molecular interactions. In this context, the “antibody” functions less as a precise molecular probe and more as a variable laboratory component whose specificity is often assumed but rarely verified. This ongoing uncertainty continues to challenge the foundation of “antibody-mediated” research and the conclusions drawn from it.

Exhibit Q: Do Antibodies Equal Protection? They Don’t Know.

The question of whether the presence of “antibodies” reliably indicates protection against disease remains a subject of significant uncertainty in immunological research. In many contexts, “antibody” levels are treated as a proxy for “immunity,” yet several studies and clinical guidelines acknowledge that the relationship between detectable “antibodies” and actual “protection” is not clearly established. For example, guidance cited from the American College of Physicians notes that there is limited knowledge about the association between “SARS-COV-2 antibodies” and “natural immunity,” including uncertainty about whether detected “antibodies” are protective, what quantity would be required for protection, and how long any such protection might last.

This ambiguity is often managed by invoking the complexity of the broader “immune” system. Because “antibodies” are only one theoretical component—alongside T-lymphocytes and innate cellular responses—the failure of a detected “antibody” to prevent disease is not interpreted as a failure of the theory, but as an indication that other “immune” mechanisms were insufficient. This multi-layered architecture ensures that the “antibody” concept remains protected from falsification; if a high “antibody titer” fails to provide protection, the focus simply shifts to cellular responses or other “immune” variables.

Diagnostic limitations further obscure the relationship between detection and immunity. It is widely admitted that “antibody” assays can produce false-positive or false-negative results due to cross-reactivity with unrelated “pathogens.” In the case of “coronaviruses,” the “antibodies” produced in response to “common cold” strains frequently react with “SARS-COV-2” assays. This overlap makes it difficult to determine whether a positive signal reflects specific “protection” or a non-specific reaction to a previous, unrelated environmental exposure.

Experimental models have reinforced the idea that “antibodies” may be neither necessary nor sufficient for survival. Studies in animal models have shown that survival after certain “viral challenges” can occur even in the total absence of “antibody” production. In such cases, the result is attributed to “innate” responses, such as those mediated by macrophages or interferons. This demonstrates a recurring pattern in the literature: the theory is designed such that it cannot fail. When “antibodies” are present but fail to protect, the “immune” system is called “complex”; when they are absent but the subject survives, the “innate system” is credited.

Ultimately, “antibody” measurements are interpreted within a flexible framework of “immune” responses rather than as definitive proof of protection. This highlights the “antibody” as a correlate whose meaning is determined by the narrative of the researcher rather than by a fixed, observable biological law. The measurement remains an interpretive act—a signal looking for a significance that is perpetually deferred to other, equally unseen, mechanisms.

Exhibit R: Antibodies — No Correlation of Protection

One of the central assumptions underlying the use of “antibody” tests is that the presence of these molecules corresponds to “protection” from disease. In immunology, this relationship is often described using the concept of a “correlate of protection” (CoP): a measurable biological marker used to predict “immunity” against a particular “pathogen.” For a CoP to be scientifically valid, a defined threshold would need to exist showing what quantity of “antibodies” consistently prevents “infection” or illness. However, for “SARS-COV-2,” such a correlate of protection has not been definitively established.

Public health authorities have acknowledged this uncertainty. Guidance cited from the American College of Physicians notes that the relationship between detectable “SARS-COV-2 antibodies” and “natural immunity” remains unclear. According to these statements, it is not known whether the presence of “antibodies” confers protection, what concentration might be required for protection, or how long any potential protection might last. As a result, “antibody” tests are not considered reliable indicators of “immunity” or predictors of susceptibility to “reinfection.”

This difficulty of establishing a correlate of protection is not unique to “SARS-COV-2.” Even for measles—often cited as an example of a “virus” with a well-defined “protective antibody threshold”—the evidence has been reexamined. For decades, a level of 120 mIU/mL of “measles virus antibodies,” measured by ELISA, has been widely used as a presumed correlate of protection. However, later analyses have questioned the strength of the evidence supporting this threshold. The original estimate was derived from a single, limited study examining “antibody” levels in individuals before and after a measles outbreak, and its accuracy has never been conclusively confirmed through rigorous, large-scale validation.

Subsequent reviews have highlighted significant methodological gaps. ELISA assays merely quantify a binding reaction; they do not assess functional properties like “neutralization” or “inhibition of attachment,” nor do they account for the role of cellular “immune” responses. Because these other variables remain incompletely defined, researchers have admitted that establishing a reliable CoP is far more difficult than the public narrative suggests. A systematic review of the 120 mIU/mL measles threshold concluded that the data supporting this cutoff is “sparse” and requires significant additional research to be considered a robust measure of “immunity.”

The reliance on “antibody” titers as a proxy for “immunity” thus rests on a series of unvalidated assumptions. Whether dealing with a “novel” entity or a disease long considered understood, the transition from an experimental signal to a biological “law” of protection is often based on limited data and interpretive convenience. Even in cases where a “protective threshold” is institutionalized, the underlying relationship between the measurement and the actual prevention of disease remains a theoretical construct rather than a demonstrated certainty.

Closing Statement

After tracing more than a century of “antibody” research—from Behring’s inferred anti‑toxins, to Ehrlich’s imaginative diagrams, to Heidelberger’s mathematical formalism, to the template theories, to Chase’s cellular findings, to the modern problems of specificity, cross‑reactivity, and reproducibility—a consistent pattern emerges. At no point in this historical record does the “antibody” appear as a directly demonstrated, purified, isolated, or structurally verified biological entity. Instead, we find a succession of theoretical frameworks constructed to explain laboratory phenomena produced under highly artificial conditions.

Each generation of researchers inherited the assumptions of the previous one, layering new interpretations on top of unproven foundations. When contradictions arose, the theories were revised, expanded, or replaced, not because the entity had finally been observed, but because the explanatory model required rescue. The “antibody” shifted from anti‑toxin, to side‑chain, to receptor, to template, to genocopy, to clonal selection, to modern immunoglobulin schematics, all without a single moment of direct empirical demonstration at the time those models were proposed. The history reads less like the discovery of a molecule and more like the evolution of an idea.

This pattern mirrors what I found in virology: indirect assays, circular reasoning, reliance on proxies, and the reification of theoretical constructs into presumed biological realities. The “antibody” and the “virus” grew up together, each used to justify the other, each dependent on the same methodological shortcuts and interpretive leaps. When one model faltered, the other was invoked to stabilize it. When both wavered, the framework was expanded to accommodate the instability.

The result is a system that appears internally coherent only because its components mutually reinforce one another. But coherence is not proof. Reproducible laboratory effects are not synonymous with demonstrated biological entities, and a century of inference does not transform a theoretical construct into a verified molecular reality.

My goal in assembling this case was not to tell you what to believe, but to show you what I found when I followed the evidence back to its origins. I approached the “antibody” story the same way I approached the “virus” story: by asking whether the foundational claims could withstand direct scrutiny. In both cases, the answer I found was the same.

But I am not the final authority. I have laid out the exhibits, the contradictions, the assumptions, and the historical record. The prosecution has rested its case.

The verdict is yours.
