Humanity Has Become One Massive Genetic Experiment: What Everyone Should Know About GMOs

Are you concerned about genetically modified foods? GMOs Revealed is a great documentary that addresses many of the questions and concerns most people have today.


In March 2014, scientists from Indiana University announced that they had conducted research to examine the operations of the fruit fly genome “in greater detail than ever before possible” and had identified “thousands of new genes, transcripts and proteins.” Their results indicated that the fly’s genome is “far more complex than previously suspected and suggests that the same will be true of the genomes of other higher organisms.” Of the approximately 1,500 new genes that were discovered, 536 of them were found within areas that were previously assumed to be gene-free zones. Furthermore, when the flies were subjected to stresses, small changes in expression level at thousands of genes occurred, and four newly modelled genes were expressed altogether differently.

Why is this important? Because it reveals how little we know about this planet and the organisms dwelling on it, yet also how much we think we know. This kind of hubris is found within all areas of human knowledge, but particularly when it comes to science.

Another great example that I’ve used before is when the populace first realized that the Earth wasn’t flat. Another is a statement attributed to physicist Lord Kelvin, who in 1900 declared that “there is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” This assertion was shattered only five years later when Einstein published his paper on special relativity.

When it comes to our genes, and the genes of other organisms, we really do know next to nothing. Unfortunately, proponents of the biotech industry (Monsanto, DuPont, Syngenta, etc.) claim otherwise, and have developed multiple, flawed assumptions that undergird agricultural bioengineering.

The information presented in this article comes from a variety of sources, but my primary source is Steven Druker, a public interest attorney and the Executive Director of the Alliance for Bio-Integrity. He initiated a lawsuit in 1998 that forced the U.S. Food and Drug Administration (FDA) to release its files on genetically engineered foods, and he recently published a book about it, which has received dozens of rave reviews from some of the world’s most respected scientists in the field. I draw primarily from his book for this article.


“This incisive and insightful book is truly outstanding. Not only is it well reasoned and scientifically solid, it’s a pleasure to read – and a must-read. Through its masterful marshalling of facts, it dispels the cloud of disinformation that has misled people into believing that GE foods have been adequately tested and don’t entail abnormal risk.” 

– David Schubert, PhD, molecular biologist and Head of Cellular Neurobiology, Salk Institute for Biological Studies.

Natural Genetic Modification Versus Human-Induced Genetic Modification

Biotech proponents have an unshakable faith in their GE crops, and these corporations hold major sway over mainstream media outlets as well as close relationships with government agencies like the FDA. Indeed, several high-level industry employees have also held positions at these institutions. One example is FDA Deputy Commissioner for Foods Michael Taylor, Monsanto’s former Vice President for Public Policy. While at the FDA, he was instrumental in getting approval for Monsanto’s genetically engineered bovine growth hormone.

Druker outlines in his book how the commercialization of genetically engineered foods was enabled by the fraudulent behaviour of these government agencies, and how this actually violates explicit mandates of federal food safety law. The evidence shows that the “FDA’s falsehoods have been abundantly supplemented with falsehoods disseminated by eminent scientists and scientific institutions,” falsehoods that prop up the entire GE food venture.

This is why it’s so amazing to see so many scientists within the field supporting the dissemination of truth and bringing these falsehoods to light. If you still think this type of thing is a conspiracy theory, we now have the documents, as well as the science, which stands on its own, to show that something is terribly wrong here.

Joseph Cummins, Ph.D. and Professor Emeritus of Genetics at Western University in London, Ontario, believes that Druker’s book is a “landmark” and that “it should be required reading in every university biology course.” 

There are several presumptions on which the bioengineering venture was based, and one of them is that natural breeding is more random and unruly than bioengineering. The standard argument holds that genetic modification has been occurring for thousands of years, and what we do now is simply that process sped up and made better.

Key Presumptions on Which the Bioengineering Venture Was Based

Genetic engineering is based on the presumption that the genome is just a linear system, where the action of a single gene will not impact the action of other genes, or disrupt their normal function.

In 2007, the New York Times published an article outlining how “the presumption that genes operate independently has been institutionalized since 1976, when the first biotech company was founded. In fact, it is the economic and regulatory foundation on which the entire biotechnology industry is built.” 

Basically, genes are viewed as autonomous, adding to the whole without acting holistically, because they don’t express their proteins in a closely coordinated manner. Another assumption used to justify genetic engineering is that genes aren’t organized in a specific way, that the sequence in which they occur is meaningless. From this point of view, a gene would function normally even if it were relocated to a different chromosome or placed next to different neighbouring genes. Quite a big assumption, don’t you think? Giorgio Bernardi, a biologist at the University of Rome III who specializes in the study of genome evolution, calls this perspective a “bean-bag view of the genome” because it regards the genes as “randomly distributed.”

Druker explains:

Together, these two assumptions supported the belief that a chunk of recombinant DNA could be put into a plant’s genome without inducing disturbance — because if the behavior of the native genes was largely uncoordinated and their arrangement was irrelevant, there would be no important patterns that could be perturbed by such insertions. Accordingly, they engendered confidence in the precision of genetic engineering, because they implied that the outcome of a gene insertion would be exactly what the bioengineers expected.

How could biotech proponents push the idea that the target organism would continue to function just as it had before, and that the change would be limited to the new trait endowed by the inserted gene? How can it simply be assumed that this would not alter any of the organism’s other qualities?

These presumptions still underlie genetic engineering today. The example of the fruit fly above serves well here. In the New York Times article cited earlier, the author noted that “genes appear to operate in a complex network,” and stated that “evidence of a networked genome shatters the scientific basis for virtually every official risk assessment of today’s commercial biotech products, from genetically engineered crops to pharmaceuticals.”

Molecular geneticist Michael Antoniou, who testified at New Zealand’s Royal Commission in 2001, notes that agricultural bioengineering “was based on the understanding of genetics we had 15 years ago, about genes being isolated little units that work independently of each other.” He also presented evidence showing that genes actually “work as an integrated whole of families.”

Despite the grave possibility that these presumptions are indeed wrong, they still form the backbone of genetic engineering today.

Antoniou himself was even selected to represent multiple nongovernmental organizations before the UK’s GM Review Panel, where he presented the reasons for precaution along with a plethora of studies that clearly justify it. Despite his presentation, and many others’, the 11 other scientists on the panel, who were biotech proponents, dismissed these studies and continued to argue that it makes absolutely no difference how genes are arranged.

How can a scientist make such a statement?

What do we have as a result? As Druker says:

Such disregard, denial, or avoidance in regard to the evidence was essential for maintaining faith in the venture, because its predictability and safety have always relied on the genome being largely disjointed; and the more the genome instead appears to function as a tightly coordinated system, the more potentially disruptive and unpredictable are the interventions of the bioengineers.

Geneticist, activist, and environmentalist David Suzuki weighed in on this very subject a few years ago in an interview with the Canadian Broadcasting Corporation (CBC):

By slipping it into our food without our knowledge, without any indication that there are genetically modified organisms in our food, we are now unwittingly part of a massive experiment. . . . Essentially, the FDA has said that genetically modified organisms, or food, are basically not much different from regular food, and so they’ll be treated in the same way. The problem is this: Geneticists follow the inheritance of genes, in what we call a vertical fashion . . . [but] what biotechnology allows us to do is to take this organism, and move it, what we call horizontally, into a totally unrelated species. Now, David Suzuki doesn’t normally mate with a carrot plant and exchange genes. What biotechnology allows us to do is to switch genes from one to the other, without regard for the biological constraints. . . . It’s very very bad science. We assume that the principles governing the inheritance of genes vertically apply when you move genes laterally or horizontally. There’s absolutely no reason to make that conclusion.

More Differences

The claim that natural breeding is riskier than bioengineering is a common argument made by GE-food proponents, and it is commonly deployed whenever an expert challenges the technology’s safety. For example, David Schubert, PhD, a molecular biologist and the Head of Cellular Neurobiology at the Salk Institute for Biological Studies, commented in Nature Biotechnology that there was mounting evidence that the insertion of even one gene into a cell’s DNA alters the expression patterns of genes throughout the entire cell. He said facts like this one, among many others, “cast doubt on the soundness of agricultural bioengineering — and entail the conclusion that it ‘is not a safe option.’”

Predictably, when a professor and a laboratory director of one of the world’s most prestigious scientific institutions makes a comment like this, there’s going to be a response. This time it came in the form of a letter, published by 18 biologists at respected universities and institutions, stating that Dr. Schubert failed to properly consider “the genetic realities.” The main reality he allegedly failed to recognize is that the natural method of plant breeding is inherently more random than bioengineering.

A portion of the letter reads as follows:

We do not take issue with Schubert’s basic contention that unintended genetic and metabolic events can take place. The reality is that ‘unintentional consequences’ are much more likely to occur in nature than in biotechnology because nature relies on the unintentional consequences of blind random genetic mutation and rearrangement to produce adaptive phenotypic results, whereas GM technology employs precise, specific, and rationally designed genetic modification toward a specific engineering goal.

In his book, Steven Druker offers the following counterargument: “This letter thus reveals how strongly the GE food venture relies on the presumption that the natural processes driving biological development are intrinsically more disorderly and risk-bearing than the genetic interventions instigated by the human mind. And it confirms that this belief forms the ideological bedrock on which the venture rests.”

In fact, a report published in 2004 by the National Academy of Sciences couldn’t uphold “even the more modest notion that bioengineering and natural breeding pose the same risks.” The panel that produced the report ranked various modes of plant breeding in terms of their disposition to produce unintended effects. They were forced to acknowledge that bioengineering is more prone to produce unintended effects than pollen-based sexual reproduction. Despite this fact, they still insisted that this does not imply a difference in risk.

Druker says in response:

Thus, there’s no rational way to reconcile the fact that natural breeding is less disruptive and more predictable than bioengineering with the claim that it poses equal or greater risk, which is why the admission in the 2004 report is a rarity — and why biotech proponents almost always ignore or deny that fact and instead assert that natural breeding is more disorderly and unpredictable.

Randomness

According to the biotech industry, natural plant breeding could actually result in crops that are dangerous for human consumption, which is why we should be grateful for genetic engineering. For example, the same NAS report mentioned above portrayed what are known as “jumping genes” as randomly mobile and threatening, but failed to recognize, as Druker points out, that although these entities do not pose risks within natural pollen-based breeding, they do when bioengineering is employed, because that process alone “tends to stir them up and get them jumping.”

Sexual reproduction is yet another area where biotech proponents claim randomness, despite the fact that we now know it is not random and that there are multiple factors that can and do influence the genetics of life. Genetic modification, whether human-induced or naturally occurring, requires a genetic “rearrangement,” a recombination of DNA. The difference between the artificial way and the natural way is that the natural way does not disrupt the entire organism, as was discussed a little earlier in the article and touched upon in the Suzuki quote above.

As Druker explains:

This natural form of recombination occurs during the formation of gametes (the sperm and egg cells). It includes a step called crossover in which two partner chromosomes break at corresponding points and then exchange complementary sections of DNA; and every time a gamete is produced, every set of paired chromosomes engages in it. In this way, all the chromosomes end up with genes from both parents instead of from only one. However, all the genes are preserved, as is the sequence in which they’re positioned. The only changes are in the relationships between alleles. . . . So this natural recombination augments diversity while maintaining stability. And without it, except for the occasional favorable mutation, the composition of chromosomes would stay the same from generation to generation, and genetic diversity would grow at far too sluggish a pace.

He goes on to mention how natural recombination preserves the order of the genes, and is predictable in the way it cuts DNA. The entire process displays a great deal of order.

Despite this fact, scientists who support GE state, as in the 2004 NAS report, that “genetic engineering methods are considered by some to be more precise than conventional breeding methods because only known and precisely characterized genes are transferred.” They use the supposed randomness and unpredictability of natural breeding to argue that bioengineering is the safer option.

Yet, as Druker so brilliantly captures:

This misleading tactic fixates on the predictability of the plant’s specific agronomic traits; and it portrays traditional breeding as less predictable than bioengineering because undesired attributes are often transferred along with the one that is desired. However, those who employ this ploy don’t acknowledge that if both parents are safe to eat, the unwanted traits hardly ever pose risk to human health. Rather, they’re undesirable for reasons irrelevant to risk (such as aesthetic appearance or seed size), and breeders must then perform back-crossing to eliminate them while retaining the trait they want. However,  although the inclusion of unwanted traits entails more work, it does not increase attendant risks. Therefore, while breeders can’t fully predict what traits will appear, they can confidently predict that the resulting plant will be safe to eat.

This is why the GE stance on natural modification is so flawed and misleading.

Druker goes on:

Although it describes the sexual reproduction of food-yielding plants as a messy and risky affair that involves the transfer of “thousands of unknown genes with unknown function,” we actually know quite a lot about those genes. And what we know is far more important than what we don’t know. We know that they’re all where they’re supposed to be, and that they’re arranged in an orderly fashion. And we know that during the essential process in which some of them are traded between partnered chromosomes in order to promote the diversity that strengthens the species, their orderly arrangement is marvelously maintained. Most important, we know that their functions mesh to form an exquisitely efficient system that generates and sustains a plant that regularly provides us with wholesome food.

This sharply contrasts with genetic engineering.

As you can see, comparing natural modification to biotech modification is not an easy process, and this isn’t even the tip of the iceberg. Research shows that it’s not natural modification that’s more random and risky, but biotech genetic modification:

The inserted cassettes are haphazardly wedged into the cell’s DNA; they create unpredictable disruptions at the site of insertion; the overall process induces hundreds of mutations throughout the DNA molecule; the activity of the inserted cassettes can create multiple imbalances; and the resultant plant cannot be deemed safe without undergoing a battery of rigorous tests that has yet to be applied to any engineered crop.

RELATED CE ARTICLES: 

Below are a few of the many articles we’ve published on GMOs. If you’re interested in reading more, please browse through our website.

Peer-Reviewed Science Losing Credibility As Large Amounts of Research Shown To Be False

Wikileaks Cables Reveal The US Government Planned To Retaliate & Cause Pain On Countries Refusing GMOs

Federal Lawsuit Forces The US Government To Divulge Secret Files On Genetically Engineered Foods

New Study Links GMOs To Cancer, Liver/Kidney Damage & Severe Hormonal Disruption

Why Bill Nye Is Not A Science Guy: What He Gets Wrong About GMOs


The Anatomy of Conspiracy Theories

Whether you believe in conspiracy theories or not, we can all agree that the use of the term has exploded in media and in conversation. The question is, why? Are we now using the term “Conspiracy Theory” more indiscriminately and on more platforms than previously? Are we, as a society, simply becoming unhinged and absurd? Are seemingly nonsensical stories, for some unknown reason, starting to resonate with people? Or are some conventional narratives getting challenged because some of these “alternative” explanations are in fact accurate, despite the fact that conventional sources refuse to acknowledge them as even potentially valid? Notice that the last two possibilities are different sides of the same coin. If you think  “conspiracy theorists” are unhinged, it is highly likely that they are suspicious of your sanity as well. Both sides insist that they are right and that the other has been hoodwinked. Note that if you choose to not pick a side, you are, by default, allowing the conventional narrative to perpetuate. That is how convention works. 

Merriam-Webster defines the term conspiracy theory as “a theory that explains an event or situation as the result of a secret plan by usually powerful people or groups”. The key elements of this definition remain consistent across all authoritative lexicons: the group responsible for an event must be powerful and covert. However, if we refer to the Wikipedia definition as of 11/2018 a new element emerges: “A conspiracy theory is an explanation of an event or situation that invokes a conspiracy—generally one involving an illegal or harmful act supposedly carried out by government or other powerful actors—without credible evidence.”

When an explanation is labeled a “Conspiracy Theory,” by today’s definition, it has no evidence to support it. An explanation with no supporting evidence is a hypothesis, not a “theory.” “Conspiracy Theory,” as it is used today, is thus an oxymoron. These “Conspiracy Theories” we seem to hear about every day should really be called “Conspiracy Hypotheses.” More concerning is that the “Conspiracy Theory” label identifies an explanation as inherently baseless. Given this linguistic construct, where is there room for a conspiracy that is in fact true?

There is also something troubling about using the term “credible” in the definition of conspiracy theory. Legally, evidence that is credible is that which a reasonable person would consider to be true in light of the surrounding circumstances. If evidence suggests an explanation that seems at the surface to be unreasonable, how does a reasonable person avoid automatically labeling the evidence not credible? If we are not careful, the credibility of the explanation and resultant conclusions would then determine the credibility of the evidence that supports it. Is this really so important? Perhaps you are quick to see that with this approach, our understanding of what is true and real can never evolve. If any evidence arose that radically disproved our understanding or eroded our faith in trusted institutions we would automatically discard it as “not credible” and remain entrenched in our accepted paradigm. “Credible” evidence cannot be a necessary requirement of a theory that challenges what is credible to begin with.

To better illustrate this, let us consider an old but very real “conspiracy theory.” About 400 years ago, European civilization was emerging from centuries of scientific and philosophical stagnation known as the dark ages. What more befitting a place for such a renaissance to occur than the center of the universe? You see, the idea that the Earth was one of eight planets revolving around a star that is orbiting the center of one of hundreds of billions of galaxies would have been absurd in Europe in the sixteenth century. Any sane person could see that the Sun and the Moon and every celestial body rises in the East and sets in the West. At that time, if someone went about proposing the idea that everything rises and falls because the Earth was spinning, they would have been laughed out of the tavern. Would that person be a conspiracy theorist? They are not proposing that “powerful actors are carrying out a harmful act,” they are merely suggesting an alternative explanation for what is observed. However, the implication of their suggestion seems to incriminate the authority on such matters as ignorant of the truth or, possibly, the perpetrators of a lie. The possibility of a conspiracy has now been introduced.

Now, let us say that this person claims to have proof of their absurd theory. Would you have taken the time to examine the evidence or would you have been more likely to dismiss them without further consideration? The very idea that they could be right would have been not just silly or heretical, but inconceivable to many, if not all. How could the evidence be credible if it implied something inconceivable? Dismissing their idea would have seemingly been the most logical and, therefore, the smartest thing to do.


When Galileo Galilei appeared in 1610 armed with a rudimentary “telescope,” few would peer into it. He claimed that the refractive properties of the pair of “lenses” would allow you to see things at great distances very clearly. With it one could see Jupiter and its moons revolving around the giant planet just as our moon revolves around Earth. How enchanting! The difficulty would arise when you put the telescope down: your feet would no longer be planted on the previously immovable center of creation. Would you have looked into his telescope? What would have been the harm in taking a peek? Certainly the fear of being proven more gullible than most would have been on your mind. What about the fear that he might be right?

Imagine what must have been going through Galileo’s mind after his monumental discovery. He saw irrefutably that the entire model of the universe had been completely misconceived. One just had to look. Most did not. I can only imagine how hard he must have tried to convince anyone to simply stop, look and listen to what he had discovered. At the time, Galileo was the Chair of Mathematics at the University of Padua and had previously held the same post at the University of Pisa. Despite his bona fides and reputation as a solid contributor to the Italian renaissance, his discovery would likely have died in obscurity if it weren’t for the support of an influential family, the Medicis, who offered Galileo a platform from which he could spread his theory. It was only through allying himself with political power that he was able to slowly generate interest in his heliocentric model of the solar system. His proposition eventually caught the attention of the Catholic church, which initially warned him to desist. Eventually, he was brought to trial before the Roman Inquisition 23 years after his discovery. At the age of 70, the intrepid mathematician and astronomer was forced to recant his findings and spent the rest of his years confined under house arrest.

Did it work? It did not. Galileo died under house arrest while Europe continued to slumber under stars that moved around them. By today’s standards, Galileo would have been labeled a Conspiracy Theorist from the day he announced his findings until he was proven right fifty years after his death. When the principle of gravitational attraction eventually became widely accepted as true, the church had to retract its position, because the motions of the stars and planets could not be explained under Newton’s laws in a geocentric universe.

On the other hand, Galileo is credited with being the father of not only observational astronomy, but of the scientific method as well. The scientific method demands that one tests an explanation without bias towards an outcome. All data is considered before deductions are made. When all other explanations have been proven wrong, the only explanation remaining becomes a theory. The theory persists as long as all subsequent experiments continue to uphold it. This is how we ultimately know what we know and have an inkling of what we don’t. If I had to choose a posthumous title for myself, “The Father of the Scientific Method” is one I could die with. Galileo is credited with this honorific not only because he valued it more than his freedom, but because he had the discipline to regard evidence objectively despite how unimaginable the implications were. This is how a body of knowledge expands. By considering the validity of the evidence first, we then can accept what was previously unimaginable, otherwise what we know tomorrow will be no different than what we know today.

Not all conspiracy theorists are Galileos. Neither are all conspiracy theories true. However, can we be certain that all of them are false? At their very core, all conspiracy theories directly or indirectly point at a central authority acting covertly and simultaneously at the media for either missing it or looking the other way. This, of course, is unimaginable, as we all know the government can make mistakes but would never do anything intentionally harmful to its citizens and then hide it. Even if they did, somebody would come forward and the media would let us know about it. This is why such a deception could never occur. The idea that your lover could be in bed with your best friend is inconceivable. Evidence of such a thing would not be credible. Dismissing all conspiracy theories seems logical and therefore seems like the smartest thing to do.

In “Sapiens”, Yuval Harari proposes an explanation for why our species, Sapiens, outfought, outthought and outsurvived all other Homo species on the planet. He suggests that it was our unique ability to describe and communicate situations and events that had no basis in reality which set us apart. In other words, we could tell stories and they could not. By uniting under a common idea, story or even myth, thousands (and now thousands of millions) of Sapiens could come together with a shared purpose, identity or belief system to displace our cousins, who were individually sturdier and just as cunning but not nearly as good at cooperating as we were. This advantage, Harari proposes, has not only led our species to eventual supremacy over all others, but has also allowed us to form communities, governments and global alliances.

Siding with the majority has served us well, until it hasn’t. One only needs to revisit the history of Galileo and basic astronomy to understand this. In actuality, the first observant minds woke up to the fact that the Earth went around the sun and not the other way round nineteen centuries before Galileo did. The Greek mathematician Aristarchus is thought to be the first Western person to place the Sun in the middle of a “solar system,” around 270 BC. A human being traveled to the moon just 360 years after Galileo “discovered” what Aristarchus had shown nearly two millennia before. How many centuries was this journey delayed because an alternative explanation in ancient Greece became a “conspiracy theory” against authority and convention?

This poses an intriguing question. Is there something hardwired in our behavioral patterns that pushes us towards conformist narratives and away from alternative ones at a precognitive level? Does the same tendency that gave rise to our enhanced ability to unite keep us in “group-think” more than we should be? How do we know we are looking at the world objectively and rejecting alternative belief systems on a purely rational basis? How does one know whether one is biased or not?

One way is to apply the scientific method. The scientific method demands that every possibility, no matter how outlandish, be tested for its veracity and dismissed only when it can be proven wrong. Without this objective pursuit of truth, misconceptions can persist indefinitely, just as the geocentric model of the universe did. Interestingly, Aristarchus was allowed to retain his theory because he lived at a time and place where philosophers, mathematicians and scientists were revered, protected and free to pursue their notions. The freedom ancient Greek society afforded its scientists only endured for a few centuries after Aristarchus lived. In Galileo’s day, the Roman Catholic church had been presiding over such things as facts for well over a thousand years. His incontrovertible proof was suppressed by the power that had the most to lose.

These days, establishing the facts of the matter may not be as easy as we presume. Conspiracy theorists claim to have proof just like the debunkers do. How do we know that the proof offered on either side is valid? Who has the time to apply the scientific method? It certainly seems safer to go with the conventional narrative because surely there are more rational minds in a larger group. Though it seems a reasonable approach, it may in fact be where we misstep. By deferring to others, we assume the majority will arrive at the truth eventually. The problem is that those in the majority who are trained to examine evidence objectively often must take a potentially career-ending risk to even investigate an alternative explanation. Why would an organization be willing to invest the resources to redirect its scientific staff to chase down and evaluate evidence that will likely endanger its reputation with the public without any upside? Thus, conventional narratives survive for another day, or in the case of an Earth-centered universe, for a couple of thousand years.

Whether or not you are a “conspiracy theorist,” we can all agree that there is a possibility, however slight, that some conventional narratives could be wrong. How would we know? Is there a source that we can trust 100%? Must we rely on our own wits? A short inquiry into this question can be disquieting. Most of us must admit that our understanding of history, science and geopolitics is merely a set of stories that we have been told by people, institutions or media that we trust explicitly or implicitly. Because most of us are not authorities on anything, it would be impossible to overturn any conventional narrative with an evidentiary argument. Challenging these paradigms is necessarily left to others. Generally speaking, there is no real reason to argue with convention if everything is seemingly unfolding acceptably. But what if you wanted to know for yourself? Is there any way to ever really know the truth without having to have faith in someone or something else?

There may not be. However, it is also naive to believe that if someone, scientist or not, were in possession of evidence that challenged our most deeply held beliefs, it would take root in the ethos on its own. Galileo enjoyed unsurpassed credibility as one of Italy’s foremost mathematicians. He also possessed irrefutable, verifiable and reproducible evidence for his revolutionary theory, yet the convention he was challenging did not crumble through his discoveries. History has shown us that it makes no difference how valid a point is; truth emerges only when someone is listening.

So, rather than seeking to independently validate or refute what we are being told, it becomes more productive to ask a different question: How biased is our society by historical standards? How does our society regard alternative theories? Do we let them co-exist with convention as the ancient Greeks did? Do we collectively invest resources to investigate them openly? Or do we dismiss, attack and vilify them as was done in the papal states in Galileo’s time? Which kind of society is more likely to get it right? Which runs the greater risk of being hoodwinked in the long run? Which is more free?


US House of Representatives Investigating if the Government Created Lyme Disease As A Bioweapon

In Brief

  • The Facts: A New Jersey lawmaker suggests the government turned ticks and insects into bioweapons to spread disease, and possibly released them. He is not the only one who believes so.

  • Reflect On: This is not the only example of supposed human experimentation on mass populations by the government.

There are a number of subjects that were once considered ‘conspiracy theories,’ which are now no longer in that realm. ‘Conspiracy theories’ usually, in my opinion, arise from credible evidence. The implications, however, are so grand and so mind-altering that many may experience some sort of cognitive dissonance as a result. One of the topics often deemed a ‘conspiracy theory’ is weaponized diseases, and the latest example comes from an approved amendment proposed by a Republican congressman from New Jersey, Chris Smith, which instructs the Department of Defense’s Inspector General to conduct a review of whether the US “experimented with ticks and insects regarding use as a biological weapon between the years of 1950 and 1975” and “whether any ticks or insects used in such experiment were released outside of any laboratory by accident or experiment design.”

The fact that Smith brought this up shows that any intelligent person who actually looks into this has reason to believe it’s a possibility, yet mainstream media outlets are ridiculing the idea, calling it a conspiracy theory instead of actually addressing the points that caused Smith to demand the review.

The fact that the amendment was approved by a vote in the House speaks volumes. Smith said that the amendment was inspired by “a number of books and articles suggesting that significant research had been done at US government facilities including Fort Detrick, Maryland, and Plum Island, New York, to turn ticks and insects into bioweapons”.

Most people don’t know that the US government has experimented on its own citizens a number of times. All of this is justified for “national security” purposes. National security has always been a term used as an excuse to prolong secrecy, justify the government’s lack of transparency, and create black budget programs that have absolutely no oversight from Congress.

For example, on September 20, 1950, a US Navy ship just off the coast of San Francisco used a giant hose to spray a cloud of microbes into the air and into the city’s famous fog. The military was apparently testing how a biological weapon attack would affect the 800,000 residents of the city. The people of San Francisco had absolutely no idea. The Navy continued the tests for seven days, and multiple people died as a result. It was apparently one of the first large-scale biological weapon trials that would be conducted under a “germ warfare testing program” that went on for 20 years from 1949 to 1969. The goal “was to deter [the use of biological weapons] against the United States and its allies and to retaliate if deterrence failed,” the government later explained. Then again, that’s if you trust the explanation coming from the government.

This could fall under the category of human subject research. It’s still happening! A dozen classified programs that involved research on human subjects were underway last year at the Department of Energy. Human subject research refers broadly to the collection of scientific data from human subjects. This could involve performing physical procedures on the subjects or simply conducting interviews and having other forms of interaction with them. It could even involve procedures performed on entire populations, apparently without their consent.


Human subjects research erupted into national controversy 25 years ago with reporting by Eileen Welsome of the Albuquerque Tribune on human radiation experiments that had been conducted by the Atomic Energy Commission, many of which were performed without the consent of the subjects. A presidential advisory committee was convened to document the record and to recommend appropriate policy responses.

When it comes to Lyme disease, the Guardian points out that:

A new book published in May by a Stanford University science writer and former Lyme sufferer, Kris Newby, has raised questions about the origins of the disease, which affects 400,000 Americans each year.

Bitten: The Secret History of Lyme Disease and Biological Weapons, cites the Swiss-born discoverer of the Lyme pathogen, Willy Burgdorfer, as saying that the Lyme epidemic was a military experiment that had gone wrong.

Burgdorfer, who died in 2014, worked as a bioweapons researcher for the US military and said he was tasked with breeding fleas, ticks, mosquitoes and other blood-sucking insects, and infecting them with pathogens that cause human diseases.

According to the book, there were programs to drop “weaponised” ticks and other bugs from the air, and that uninfected bugs were released in residential areas in the US to trace how they spread. It suggests that such a scheme could have gone awry and led to the eruption of Lyme disease in the US in the 1960s.

This is concerning. It’s a story that, for some reason, instantly reminded me of the MK ultra program, where human subjects were used for mind control research.

If things like this occurred in the past, it’s hard to understand why someone would deem the possibility of this happening again a ‘conspiracy theory.’ What makes one think this wouldn’t be happening again, especially given the fact that there is sufficient evidence suggesting it is?

Lyme disease is also very strange. If you did get it, you probably wouldn’t know immediately – unless you’re one of the chronic sufferers who have had to visit over 30 doctors to get a proper diagnosis. Lyme disease tests are highly inaccurate, often inconclusive or indicating false negatives.

Why? Because this clever bacterium has found a way to dumb down the immune system and white blood cells so that it’s not detectable until treatment is initiated. To diagnose Lyme disease properly, you must see a “Lyme Literate MD” (LLMD). However, more and more doctors are turning their backs on patients due to sheer fear of losing their practices! Insurance companies and the CDC will do whatever it takes to stop chronic Lyme disease from being diagnosed, treated, or widely recognized as an increasingly common issue.

You can read more about that here.

The Takeaway

It’s becoming more apparent that our government and our federal health regulatory agencies are extremely corrupt. There are a number of examples throughout history proving this. The fact that something like this doesn’t seem believable to the public is ridiculous, and it further enhances and prolongs the ability of the powerful elite and the government to continue conducting these activities. Awareness is key.


The Medical Journals’ Sell-Out—Getting Paid to Play

[Note: This is Part IX in a series of articles adapted from the second Children’s Health Defense eBook: Conflicts of Interest Undermine Children’s Health. The first eBook, The Sickest Generation: The Facts Behind the Children’s Health Crisis and Why It Needs to End, described how children’s health began to worsen dramatically in the late 1980s following fateful changes in the childhood vaccine schedule.]

The vaccine industry and its government and scientific partners routinely block meaningful science and fabricate misleading studies about vaccines. They could not do so, however, without having enticed medical journals into a mutually beneficial bargain. Pharmaceutical companies supply journals with needed income, and in return, journals play a key role in suppressing studies that raise critical questions about vaccine risks—which would endanger profits.


An exclusive and dependent relationship

Advertising is one of the most obviously beneficial ways that medical journals’ “exclusive and dependent relationship” with the pharmaceutical industry plays out. According to a 2006 analysis in PLOS Medicine, drugs and medical devices are the only products for which medical journals accept advertisements. Studies show that journal advertising generates “the highest return on investment of all promotional strategies employed by pharmaceutical companies.” The pharmaceutical industry puts a particularly “high value on advertising its products in print journals” because journals reach doctors—the “gatekeeper between drug companies and patients.” Almost nine in ten drug advertising dollars are directed at physicians.

In the U.S. in 2012, drug companies spent $24 billion marketing to physicians, with only $3 billion spent on direct-to-consumer advertising. By 2015, however, consumer-targeted advertising had jumped to $5.2 billion, a 60% increase that has reaped bountiful rewards. In 2015, Pfizer’s Prevnar-13 vaccine was the nation’s eighth most heavily advertised drug; after the launch of the intensive advertising campaign, Prevnar “awareness” increased by over 1,500% in eight months, and “44% of targeted consumers were talking to their physicians about getting vaccinated specifically with Prevnar.” Slick ad campaigns have also helped boost uptake of “unpopular” vaccines like Gardasil.

Advertising is such an established part of journals’ modus operandi that high-end journals such as The New England Journal of Medicine (NEJM) boldly invite medical marketers to “make NEJM the cornerstone of their advertising programs,” promising “no greater assurance that your ad will be seen, read, and acted upon.” In addition, medical journals benefit from pharmaceutical companies’ bulk purchases of thousands of journal reprints and industry’s sponsorship of journal subscriptions and journal supplements.


In 2003, an editor at The BMJ wrote about the numerous ways in which drug company advertising can bias medical journals (and the practice of medicine)—all of which still hold true today. For example:

  • Advertising monies enable prestigious journals to get thousands of copies into doctors’ hands for free, which “almost certainly” goes on to affect prescribing.
  • Journals are willing to accept even the most highly misleading advertisements. The FDA has flagged numerous instances of advertising violations, including ads that overstated a drug’s effectiveness or minimized its risks.
  • Journals will guarantee favorable editorial mentions of a product in order to earn a company’s advertising dollars.
  • Journals can earn substantial fees for publishing supplements even when they are written by “paid industry hacks”—and the more favorable the supplement content is to the company that is funding it, the bigger the profit for the journal.

Discussing clinical trials, the BMJ editor added: “Major trials are very good for journals in that doctors around the world want to see them and so are more likely to subscribe to journals that publish them. Such trials also create lots of publicity, and journals like publicity. Finally, companies purchase large numbers of reprints of these trials…and the profit margin to the publisher is huge. These reprints are then used to market the drugs to doctors, and the journal’s name on the reprint is a vital part of that sell.”


Industry-funded bias

According to the Journal of the American Medical Association (JAMA), nearly three-fourths of all funding for clinical trials in the U.S.—presumably including vaccine trials—came from corporate sponsors as of the early 2000s. The pharmaceutical industry’s funding of studies (and investigators) is a factor that helps determine which studies get published, and where. As a Johns Hopkins University researcher has acknowledged, funding can lead to bias—and while the potential exists for governmental or departmental funding to produce bias, “the worst source of bias is industry-funded.”

In 2009, researchers published a systematic review of several hundred influenza vaccine trials. Noting “growing doubts about the validity of the scientific evidence underpinning [influenza vaccine] policy recommendations,” the authors showed that the vaccine-favorable studies were “of significantly lower methodological quality”; however, even these poor-quality studies—when funded by the pharmaceutical industry—got far more attention than equivalent studies not funded by industry. The authors commented:

[Studies] sponsored by industry had greater visibility as they were more likely to be published by high impact factor journals and were likely to be given higher prominence by the international scientific and lay media, despite their apparent equivalent methodological quality and size compared with studies with other funders.

In their discussion, the authors also described how the industry’s vast resources enable lavish and strategic dissemination of favorable results. For example, companies often distribute “expensively bound” abstracts and reprints (translated into various languages) to “decision makers, their advisors, and local researchers,” while also systematically plugging their studies at symposia and conferences.

The World Health Organization’s standards describe reporting of clinical trial results as a “scientific, ethical, and moral responsibility.” However, it appears that as many as half of all clinical trial results go unreported—particularly when their results are negative. A European official involved in drug assessment has described the problem as “widespread,” citing as an example GSK’s suppression of results from four clinical trials for an anti-anxiety drug when those results showed a possible increased risk of suicide in children and adolescents. Experts warn that “unreported studies leave an incomplete and potentially misleading picture of the risks and benefits of treatments.”


Debased and biased results

The “significant association between funding sources and pro-industry conclusions” can play out in many different ways, notably through methodological bias and debasement of study designs and analytic strategies. Bias may be present in the form of inadequate sample sizes, short follow-up periods, inappropriate placebos or comparisons, use of improper surrogate endpoints, unsuitable statistical analyses or “misleading presentation of data.”

Occasionally, high-level journal insiders blow the whistle on the corruption of published science. In a widely circulated quote, Dr. Marcia Angell, former editor-in-chief of NEJM, acknowledged that “It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines.” Dr. Angell added that she “[took] no pleasure in this conclusion, which [she] reached slowly and reluctantly” over two decades at the prestigious journal.

Many vaccine studies flagrantly illustrate biases and selective reporting that produce skewed write-ups that are more marketing than science. In formulaic articles that medical journals are only too happy to publish, the conclusion is almost always the same, no matter the vaccine: “We did not identify any new or unexpected safety concerns.” As an example of the use of inappropriate statistical techniques to exaggerate vaccine benefits, an influenza vaccine study reported a “69% efficacy rate” even though the vaccine failed “nearly all who [took] it.” As explained by Dr. David Brownstein, the study’s authors used a technique called relative risk analysis to derive their 69% statistic because it can make “a poorly performing drug or therapy look better than it actually is.” However, the absolute risk difference between the vaccine and the placebo group was 2.27%, meaning that the vaccine “was nearly 98% ineffective in preventing the flu.”
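To make the distinction concrete, here is a minimal sketch of the arithmetic. The event rates below are illustrative assumptions chosen only so that they reproduce the figures quoted above (a 69% relative reduction alongside a 2.27% absolute difference); they are not taken from the study itself.

```python
# Illustrative only: rates chosen to reproduce the quoted 69% relative
# and 2.27% absolute figures; these are not the study's actual data.
placebo_rate = 0.0329   # assumed flu rate in the placebo group (3.29%)
vaccine_rate = 0.0102   # assumed flu rate in the vaccinated group (1.02%)

absolute_risk_reduction = placebo_rate - vaccine_rate              # ~0.0227
relative_risk_reduction = absolute_risk_reduction / placebo_rate   # ~0.69

print(f"Absolute risk reduction: {absolute_risk_reduction:.2%}")   # 2.27%
print(f"Relative risk reduction: {relative_risk_reduction:.0%}")   # 69%
print(f"Number needed to vaccinate: {1 / absolute_risk_reduction:.0f}")  # ~44
```

The same pair of event rates yields both numbers; which one a write-up chooses to headline largely determines how impressive the vaccine appears.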


Trusted evidence?

In 2018, the Cochrane Collaboration—which bills its systematic reviews as the international gold standard for high-quality, “trusted” evidence—furnished conclusions about the human papillomavirus (HPV) vaccine that clearly signaled industry bias. In May of that year, Cochrane’s highly favorable review improbably declared the vaccine to have no increased risk of serious adverse effects and judged deaths observed in HPV studies “not to be related to the vaccine.” Cochrane claims to be free of conflicts of interest, but its roster of funders includes national governmental bodies and international organizations pushing for HPV vaccine mandates as well as the Bill & Melinda Gates Foundation and the Robert Wood Johnson Foundation—both of which are staunch funders and supporters of HPV vaccination. The Robert Wood Johnson Foundation’s president is a former top CDC official who served as acting CDC director during the H1N1 “false pandemic” in 2009 that ensured millions in windfall profits for vaccine manufacturers.

Two months after publication of Cochrane’s HPV review, researchers affiliated with the Nordic Cochrane Centre (one of Cochrane’s member centers) published an exhaustive critique, declaring that the reviewers had done an incomplete job and had “ignored important evidence of bias.” The critics itemized numerous methodological and ethical missteps on the part of the Cochrane reviewers, including failure to count nearly half of the eligible HPV vaccine trials, incomplete assessment of serious and systemic adverse events and failure to note that many of the reviewed studies were industry-funded. They also upbraided the Cochrane reviewers for not paying attention to key design flaws in the original clinical trials, including the failure to use true placebos and the use of surrogate outcomes for cervical cancer.

In response to the criticisms, the editor-in-chief of the Cochrane Library initially stated that a team of editors would investigate the claims “as a matter of urgency.” Instead, however, Cochrane’s Governing Board quickly expelled one of the critique’s authors, Danish physician-researcher Peter Gøtzsche, who helped found Cochrane and was the head of the Nordic Cochrane Centre. Gøtzsche has been a vocal critic of Cochrane’s “increasingly commercial business model,” which he suggests is resulting in “stronger and stronger resistance to say anything that could bother pharmaceutical industry interests.” Adding insult to injury, Gøtzsche’s direct employer, the Rigshospitalet hospital in Denmark, then fired Gøtzsche. In response, Dr. Gøtzsche stated, “Firing me sends the unfortunate signal that if your research results are inconvenient and cause public turmoil, or threaten the pharmaceutical industry’s earnings, …you will be sacked.” In March 2019, Gøtzsche launched an independent Institute for Scientific Freedom.

In 2019, the editor-in-chief and research editor of BMJ Evidence Based Medicine—the journal that published the critique of Cochrane’s biased review—jointly defended the critique as having “provoke[d] healthy debate and pose[d] important questions,” affirming the value of publishing articles that “hold organisations to account.” They added that “Academic freedom means communicating ideas, facts and criticism without being censored, targeted or reprimanded” and urged publishers not to “shrink from offering criticisms that may be considered inconvenient.”


The censorship tsunami

Another favored tactic is to keep vaccine-critical studies out of medical journals altogether, either by refusing to publish them (even if peer reviewers recommend their publication) or by concocting excuses to pull articles after publication. In recent years, a number of journals have invented bogus excuses to withdraw or retract articles critical of risky vaccine ingredients, even when written by top international scientists. To cite just three examples:

  • The journal Vaccine withdrew a study that questioned the safety of the aluminum adjuvant used in Gardasil.
  • The journal Science and Engineering Ethics retracted an article that made a case for greater transparency regarding the link between mercury and autism.
  • Pharmacological Research withdrew a published veterinary article that implicated aluminum-containing vaccines in a mystery illness decimating sheep, citing “concerns” from an anonymous reader.

Elsevier, which publishes two of these journals, has a track record of setting up fake journals to market Merck’s drugs, and Springer, which publishes the third journal as well as influential publications like Nature and Scientific American, has been only too willing to accommodate censorship requests. However, even these forms of censorship may soon seem quaint in comparison to the censorship of vaccine-critical information now being implemented across social media and other platforms. This concerted campaign to prevent dissemination of vaccine content that does not toe the party line will make it harder than ever for American families to do their due diligence with regard to vaccine risks and benefits.


