Scientific Thinking: Validity, Debate, and Politics?

There are a great many topics that people disagree on, whether the clichés (politics, religion, ethics) or the everyday: "This is the best way to do such and such," "This is what's right"...

 

Regardless of the topic, people are generally pretty terrible at accepting and applying logic, or even the scientific method, to their beliefs.

 

To an extent, this is acceptable, natural, and fine. The book "The Righteous Mind: Why Good People Are Divided by Politics and Religion" by Jonathan Haidt does a great job of going into the psychology behind these disagreements in depth.

 

On the discussion of logic, though, I've found there are three common, somewhat intellectual responses used to simply disregard and dismiss what another person has said, especially when done under the veneer of being intellectual. This ignores all of the unintellectual responses, such as calling each other names, ignoring what was said, and so on.


The three somewhat intellectual responses are:

  1.  Claim it has a bad methodology
  2.  Claim the data is bad
  3.  Claim the argument is overly simple OR there is more nuance to the situation

 

All of these responses can be valid, but there are bounds on when they are valid versus invalid.

Methodology is often subject- and domain-specific. Beyond domain-specific complaints, there are some general questions that help determine whether the methodology and data are sufficient.

 

  • Is the sample size large?
  • Are there multiple samples?
  • Does the data come from multiple sources to account for any systematic error or unintended bias?
  • Are other confounding factors and variables controlled for?
  • What sort of experiment is this?
    • Does it have an independent and a dependent variable? OR
    • Is it a natural experiment?
    • Is it an observational/regression model? (more is needed to declare a causal link)
      - Remember this: "Association is not causation."

  • Have the results been replicated using the same methodology?
  • Can the results be replicated using a different dataset?
  • Can the results be replicated using a different methodology?
  • Can the result be invalidated? How? Was this tested?
    • If the paper states a conclusion without being able to control specifically for that effect, or the effect cannot be falsified, then it is not a scientific result. (Two common examples of this are stating "It must be God" or, equally common but often uncriticized, "It must be inherent to the system.") You generally see this when a popular explanation is applied with little or no justification. Such explanations may or may not actually be correct, but they are generally based on logically unsound reasoning.

  • Are there logical fallacies? (Can it be reformulated to still stand despite a few logical problems?)
  • Are alternative explanations examined, or stated for future investigation?
  • What was actually examined? What are the limits of generalizing outside of what was investigated? (For example, it may apply only to certain geographical regions.)
  • Has it been peer-reviewed? (This one helps, but is more of a shortcut for assessing the above)

(These next two are often domain-specific, but when possible they are excellent tools that can apply across disciplines.)

  • Is it double-blinded?
  • Is it placebo-controlled?
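The point above about confounders and "association is not causation" can be made concrete with a small simulation (a minimal sketch, assuming numpy is available; the variable names are just illustrative):

```python
# A hidden confounder z drives both x and y, so x and y correlate
# strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

z = rng.normal(size=n)                  # hidden confounder (e.g., hot weather)
x = z + rng.normal(scale=0.3, size=n)   # "ice cream sales"
y = z + rng.normal(scale=0.3, size=n)   # "sunburn rates"

r_xy = float(np.corrcoef(x, y)[0, 1])   # strong raw association

# Controlling for z: correlate the residuals after removing z's effect
r_partial = float(np.corrcoef(x - z, y - z)[0, 1])
```

Here the raw correlation between x and y is strong, yet once the confounder z is controlled for (by correlating the residuals), the association essentially vanishes: z, not x, explains y.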

If a study does well on many or most of these points, it is likely acceptable science and acceptable as a premise for someone's argument. Likewise, if it does not do well on many of these points, it is valid to criticize it and be skeptical of it.



Despite its strong title, the sociology paper "Fuck Nuance" by Kieran Healy does an excellent job of discussing when nuance is a valid objection and when it is not, along with several commonly misused rebuttals within the social sciences.

https://doi.org/10.1177%2F0735275117709046

In short, overly nuanced models generally fail. A great field that is aware of this is the machine learning community. Too much nuance, in the form of an overly complicated model, ends up fitting the quirks of the particular data it was built on: the model overfits the data and fails to generalize well to real-world examples.

 You might even say an overly nuanced model is a chance to apply Occam's Razor...

Overfitting the data, in my experience, seems to be more common than the situation where an argument/model doesn't have enough nuance. Too little nuance is roughly equivalent to underfitting the data (e.g., you draw a line when you need a curve).

 

(Image from BinaryCoders)


Lastly, on the topic of nuance: if the disagreement is about a value assessment (such as "I'm fighting for Truth and Justice! You're not!"), it's not the model or the nuance that has the problem; it's a disagreement in premises or paradigms.
