The Vaccine Blog

youmevaccines@gmail.com

How should vaccine misinformation be penalised?

Should vaccine misinformation online be penalised?



According to an article published on theconversation.com, "it was fake news that led to the 'pizzagate' scandal in 2016, for example. This involved unsubstantiated accusations of child abuse against prominent individuals linked to a Washington DC pizzeria. Last year, the brand Target was falsely accused of selling 'satanic' children's clothes on social media. The consequences of direct misinformation can be far reaching, leading to a breakdown in brand trust. This erosion is particularly pronounced when misinformation originates from seemingly trustworthy sources, forcing brands into crisis management mode."

 

This is, unfortunately, far from the only example:

 

Donald Trump derided any critical news coverage as "fake news", and his unwillingness to concede the 2020 presidential election eventually led to the January 6, 2021 riot at the US Capitol.

For years, radio host Alex Jones denounced the parents of children slaughtered in the Sandy Hook school shooting in Newtown, Connecticut as "crisis actors". On August 5, 2022, he was ordered by a jury to pay more than US$49 million in damages to two families for defamation.



What this demonstrates is that sharing misinformation can have a large-scale impact on the lives of the people it reaches. Sometimes that impact can't be reversed, or can be reversed only to a degree, creating lifelong issues for individuals or societies. Sometimes the effect is seen immediately; in other cases it can take years or even decades for the impact of misinformation and disinformation campaigns to become apparent, and on an individual level some people will see it sooner than others. It's also crucial to note that misinformation often disproportionately affects certain groups, such as ethnic minorities and women, because misinformation and disinformation campaigns are frequently targeted at specific audiences. This means tailored approaches are often required to address them, and the more sophisticated the campaign, the more novel the approach needs to be. For these reasons, it's worth exploring misinformation and the penalties that could or should be applied to it.

 

First, let's consider a situation where misinformation is shared accidentally. Perhaps an individual without a background in the relevant area shared it with several friends. Perhaps an information-dense article was shared containing incomplete or inaccurate facts. Perhaps the majority of the information was correct and only one or a few facts were wrong, without detracting from the validity of the article because all the main principles were accurate. There are endless examples of how this may happen. In these cases, there isn't any real justification for punishment, and I think the vast majority of people would agree with me here.

 

Now let's consider something different: a scenario where a well-respected media outlet with a significant following shared incorrect information. Maybe it was intentional, maybe it wasn't; for our purposes, let's say it wasn't. Something simply slipped through the cracks in their fact-checking process, and now it has been published and disseminated around the world. It happens. Maybe someone new and inexperienced on the fact-checking team missed something. Of course, this could still happen with more experienced individuals, but it's less likely to.

 

Regardless, this breach is clearly more serious than the previous scenario. It has the potential for global impact, or at the very least it could cause large-scale harm. So a more severe penalty is warranted, whether a larger fine or other consequences affecting the company's privileges. Most companies also have procedures for reviewing breaches like these and feeding them back to the relevant employee or employees, and they generally put policies in place to reduce the risk of it happening again. Once this is done (and perhaps an apology given to anyone affected), it can generally be concluded that there was no malicious intent behind the incorrect information. When accountability is taken in this way, whether by an individual or a company, that generally signals good intent.

 

Intent

 

I'm going to draw your attention to that word for a moment. There's one common element in these first scenarios: intention. I've said this before, but it bears repeating: intention really matters. In both of these cases, there wasn't an intention to deceive. They were honest mistakes, and the penalties I suggested should reflect that. Lack of knowledge can be forgiven if the individual is open to correcting the information and updating their worldview. Wilful ignorance, which can occur for many reasons, is much more difficult to forgive. It means there was intent to deceive and possibly to harm, or at the very least an indifference to the accuracy of the information. In the context of health misinformation, it can signal indifference to the welfare of others too. So why does intent matter so much? In my view, someone's intentions reflect their value system, which tells you a huge amount about them, and behaviour tends to follow values. This idea of intention and its consequences leads me to my next point.

 

So let's take a scenario where there is a deliberate attempt to deceive; this is generally classed as disinformation. Maybe a politician spreads propaganda to further their political agenda, a company spreads misinformation to increase profits, or a media agency spreads misinformation to get more views, clicks and likes. There are endless examples of this online, and arguably offline too. Something else I'd add is that the internet didn't create the idea of false information being spread by those with ulterior motives; it just made the phenomenon more widespread and made the general public more aware of it. That said, it wasn't always called "misinformation" back then; you might be more familiar with the term propaganda. An article published on theconversation.com describes how "When the United States declared war on Germany 100 years ago, the impact on the news business was swift and dramatic. In its crusade to 'make the world safe for democracy,' the Wilson administration took immediate steps at home to curtail one of the pillars of democracy – press freedom – by implementing a plan to control, manipulate and censor all news coverage, on a scale never seen in U.S. history. Following the lead of the Germans and British, Wilson elevated propaganda and censorship to strategic elements of all-out war. Even before the U.S. entered the war, Wilson had expressed the expectation that his fellow Americans would show what he considered 'loyalty.'" The point being that deliberately deceptive information has been spread at scale for over a century; the internet simply changed the channels.

 

Now let's say an entire inaccurate article was widely shared. That's much more difficult to rectify. Time and effort evidently went into making the article seem legitimate; nobody misses that much incorrect information by accident. You have to be ignoring it. And yes, many people will realise; we all know how hostile people can be when defending themselves under the armour of online anonymity. I'm not saying the disinformation (it's intentional this time, remember) would go unnoticed by the audience. Far from it. But I'm not referring to those readers. I'm referring to those who read about a topic like vaccines and feel a familiar pang of dread every time it comes up. They may feel this way because they're struggling to make a decision, or because they've already made one and remain uncertain; many variables in life can change very quickly and alter our perception of ourselves and, by extension, our decisions. Take those people and multiply them by hundreds of millions. That's the scale of the issue. That's the reality of what we are facing.

 

So pick any example you want and think back to the title of this post: should this be punished? How, and why? Again, there's a deliberate attempt to deceive here. There can be very serious breaches of trust in these cases, and sometimes even legal breaches, online or offline. For these reasons and more, in cases where there is a deliberate attempt to deceive, I'd argue that any potential consequences need to be considered very carefully. That is how the best solutions for preventing future misinformation sharing will be found. And to find good solutions, it's important to ask good questions, which is exactly what I'm going to do.

 

Was there only one incorrect fact, or was the entire piece inaccurate? That matters, because one wrong fact and a wholly incorrect article are very different things. A single incorrect fact may be forgotten, skimmed over, or edited relatively quickly and easily. No harm done; or, at least, the harm is minimised.



Here's something further to consider: which fact was wrong? Context matters, as with everything. Was it a fact central to the principle being discussed? Will it leave gaps in readers' understanding of other important principles and, in this way, have a knock-on effect? Remember the butterfly-hurricane analogy. The website statistics quotes it very eloquently: "Don't underestimate the flutter of a butterfly's wings; it can create a hurricane across the world. The butterfly effect encapsulates the idea that every action is linked, creating a web of cause and effect." It's important to apply this principle in the context of misinformation, because one negative post can cause ripples around the world.

So, what to do?

 

Shouldn't this be regulated, at least to some degree?

 

I think we'll all agree that it should be.




That said, it's important not to think of this in a binary, black-and-white way, i.e. misinformation should or shouldn't be regulated online. There ought to be different degrees of regulation for different types and amounts of misinformation, as I've discussed above. A blanket approach can alienate certain groups of people, such as those who are simply raising concerns or discussing rumours (which are unverified pieces of information and so can't be classified as misinformation). Although I genuinely appreciate that actions should have consequences and should be taken seriously, cancel culture perpetuates this idea of blanket punishment. There is little to no consideration of context, degree of harm, or intention. If an individual shares something people don't like, they're shunned and banned. That's it.

 

Which only reinforces the original issue. If a person or group feels ostracised, they're more likely to share misinformation. They may progress from misinformation to more extremist disinformation, or even to extremist behaviour in person. Further, people operate in groups and social networks, so one bad apple can spoil the bunch. It's for these reasons that looking at this at a more granular level is important.

 

Here's the key takeaway: the punishment must be proportional to the crime. A teenager who shoplifts alcohol isn't going to be jailed for 20 years; they may be fined, counselled, required to do community service, and so on. Manslaughter, on the other hand, may result in a life sentence and/or a substantial fine. Obviously these offences aren't comparable to sharing misinformation; it's simply an analogy to demonstrate the principle of a punishment needing to fit the crime.

 

This makes complete sense in a real-world context. Of course it does: why should anyone receive a disproportionately severe penalty for a minor crime, or, for that matter, a lacklustre penalty for a severe one? If this happened, people would be up in arms. There would be protests, speeches, and so on.

 

So here's my question:

 

Why do we allow it online?

 

Staying with the legal analogy, even the principle of "innocent until proven guilty" is often not applied in an online context. Specifically in the context of vaccination misinformation, any and all queries are often immediately dismissed as misinformation before the context behind why they were shared is understood. Very little effort, energy, or time is spent trying to take a holistic, bigger-picture approach. And to a degree that's understandable; we have a million stimuli competing for our attention every moment of every day.

 

It's OK, essential even, to use cognitive heuristics (mental shortcuts) to be more selective about what we spend time, effort, and energy on. That's fine. However, what we cannot then do is wonder why vaccine hesitancy is still such a significant problem, or why it continues to be one in future epidemics and pandemics. The reason has been right in front of us the entire time, waiting to be addressed.

 

That really matters, because understanding can help defuse emotion and improve the quality of communication. Both of these factors, among many others, feed into achieving better solutions.

 

However, there's a reason I mention communication specifically, and why I emphasise it so much in the ethos of this website. It's certainly true that vaccine hesitancy, like any social issue, is complex and multifactorial. I say this a lot, but it's worth repeating: it's a framework of interlinked issues. So the answer isn't going to be a catch-all approach where we outright ban dissenting views; that can impact freedom of speech. A common denominator in all of these issues, however, is communication, or, more accurately, a lack of it, along with a lack of understanding of how and why people communicate the way they do, especially on emotionally charged topics like vaccines and vaccine hesitancy. Hopefully I can shed some light on that in the next section.

 

The first element is background: what is theirs? Did they grow up in a natural-living community? Maybe they married someone from this community, raised their kids within it, and formed genuinely fulfilling personal and professional relationships there. They may have made these irrevocable decisions as a result of their beliefs, and it's very difficult to turn back once you've established yourself within a community to that extent. And, of course, nobody gets to choose their parents.

 

Is the reason political? The same principle applies here. We're all raised with certain values and in a certain environment, and these factors, which were outside our control, shape our belief systems. And those are only the influences we are aware of; there are many more variables we are not aware of that dictate what beliefs and values we will hold throughout our lives. These in turn shape our political views and affiliations (or, indeed, the lack of them). For this reason, beliefs, value systems, and how they shape political affiliations are key considerations when discussing difficult issues.



Conclusion

 

The penalty should be proportionate to the crime committed, and the principle of "innocent until proven guilty" should always apply. Thanks for reading.

 

For access to other insightful articles and other benefits, do consider becoming a paying subscriber on Substack.

 

  1. How subtle forms of misinformation affect what we buy and how much we trust brands
  2. How Woodrow Wilson’s propaganda machine changed American journalism
  3. Three reasons why disinformation is so pervasive and what we can do about it