52 Weeks of Data Pattern Analysis: Week 3: A Paradigm Conundrum

This week I wanted to provide a deeper exploration into the concept of paradigms.  Recall that in Week 1, I wrote about the Age Crime Curve and Health Cost Curve and I indicated that I had found the reasons that these curves are shaped the way they are.  After several failed attempts to explain these curves, I wanted to try again with 52 weeks of data pattern analysis. 

In Week 2, I tried to go back to the beginning, where I started to develop the solution to these curves.  I provided an overview of the concept of “paradigms” and went over some discoveries I had made while working on my master’s thesis.  The short version of that story is that the results of my calculations had created a “paradigm conundrum.”

I Have a Plan!

I recognize that this is going deeper into the weeds of criminal justice research than most of you probably want to go.  There is a point however, and if you can bear with me, I will take you there.  I have a plan.

The problem with understanding the age crime curve and the health cost curve is that our scientific paradigms are flawed and we must start thinking differently.  The solution to these curves is about 5 paradigm shifts deep.  I know how to navigate these shifts because they are in a decade’s worth of research I did and never published.

I am claiming that we have the following situation where there are multiple paradigm shifts required.

  1. Paradigm 1: Shift your thinking
  2. Paradigm 2: Shift your thinking again
  3. Paradigm 3: Shift your thinking again
  4. Paradigm 4: Shift your thinking again
  5. Paradigm 5: Shift your thinking again

If I try to discuss paradigm shift 5, I doubt that anyone will understand what I am writing because I have glossed over a connected series of four previous unpublished discoveries that each involved a separate major paradigm shift. 

Paradigm shift 1 is hard enough to understand.  In paradigm shift 1, I will be discussing how we think about traits like the propensity for crime or the propensity for health.  There is an almost universal misunderstanding of the nature of traits.  People tend to think of traits as stable, but traits are highly dynamic. Understanding the nature of traits is essential for anyone who is interested in changing themselves, or facilitating change in others.

A Paradigm Conundrum

After I finished my master’s thesis, I was facing a paradigm conundrum.  The conundrum arose because my analyses raised questions about some existing criminal justice paradigms.  These paradigms have been driving criminal justice research on the concept of offending risk since the early 1970s.  Probably no one outside of criminal justice has heard about these issues, so a basic explanation is in order.  This will help you understand the context of what follows.

In the 1970s, the US and several other Western nations experienced a crime bump that was largely demographic in origin: the large baby boom cohorts born in the late 1940s and 1950s reached their peak offending ages, according to the age crime curve, in the 1960s and 1970s.  Note that some will argue that this crime bump was not due to demographic shifts in population age intersecting with the age crime curve, but I have some analyses that support my claims.  More paradigm shifts are needed before I can cover that research, and I will cover those paradigm shifts as well.

In 1974, Robert Martinson published a critique of criminal justice research called “What Works? Questions and Answers About Prison Reform.”  The article was essentially a critique of the lack of rigor in the research methods being used in criminal justice.  Because of that lack of rigor, he argued, we really were not sure whether treatment was effective.  However, rather than focusing on his point about research quality, his message was construed as “nothing works” in criminal offender rehabilitation.

You can read a little about Robert Martinson’s work on Wikipedia.

https://en.wikipedia.org/wiki/Robert_Martinson

If you do a Google search for “robert martinson what works” without the quotes, you can read more about the fuss that was caused by the Robert Martinson article.

The combination of the crime bump, the Martinson article, and some other things that happened during the 1970s and 1980s led policy makers to dramatically curtail efforts to rehabilitate prisoners.  There was a “lock ’em up and throw away the key” mentality at the time.  Rehabilitation efforts were largely eliminated and replaced with long, fixed sentences.  Our prison population exploded from about 100 prisoners per 100,000 people in the US to around 700 per 100,000.

The What Works Paradigm

In response to the Martinson article and other events at the time, an effort was made to revive the offender rehabilitation paradigm.  The concept of “what works” was developed. The effort to find the things that worked to rehabilitate offenders was promoted in part by Don Andrews and James Bonta in Canada in the 1970s and early 1980s.  Sadly, Don Andrews has passed away.  Dr. James Bonta is on LinkedIn, so I will tag him on this. 

In response to the perception in the 1970s that criminal risk could not be changed, Andrews and Bonta suggested that not enough attention had been devoted to discovering “what works” to reduce recidivism risk.  They argued that, with the proper treatment, criminal recidivism risk can be changed for the better.  They wrote numerous articles and several books on the topic of what works in offender rehabilitation. 

The three pillars of Andrews and Bonta’s what works effort were 1) static recidivism risk, 2) dynamic treatment needs, and 3) responsivity to treatment considerations.  To quantify static recidivism risk and dynamic treatment needs, Andrews and Bonta developed a “dynamic risk assessment instrument” called the “Level of Service Inventory-Revised” (LSI-R).  They suggested that the LSI-R could measure both levels of static offender risk and changes in dynamic treatment needs.

The Dynamic Predictive Validity Test

If you recall, my master’s thesis involved testing the “dynamic predictive validity” of a criminal offender risk assessment.  The instrument scores I was testing were generated with the LSI-R, and “dynamic predictive validity” was a psychometric test that had been invented by Andrews and Bonta in the 1980s and 1990s. 

The basic premise behind the dynamic predictive validity thesis was as follows.

  1. Criminal recidivism risk is dynamic (it is changing over time).
  2. The LSI-R is a “dynamic” risk assessment instrument capable of measuring changes in recidivism risk.
  3. Prediction accuracy improves from the first to the second assessment with the LSI-R.
  4. The improvement in LSI-R accuracy from the first to the second assessment occurs because 1) offender recidivism risk changed between the two assessments, and 2) the LSI-R detected those changes, which is why the second assessment score was more accurate.
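To make the premise concrete, here is a minimal sketch of how a dynamic predictive validity test is typically run: compute a prediction accuracy statistic, such as the AUC (area under the ROC curve, equivalent to the concordance probability), separately for the first and second assessment scores against the same recidivism outcome, and check whether accuracy improves.  The scores below are made-up illustrative numbers, not data from the thesis.

```python
def auc(scores, outcomes):
    """Concordance (AUC): the probability that a randomly chosen
    recidivist scored higher than a randomly chosen non-recidivist.
    Computed here by brute force over all recidivist/non-recidivist pairs."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical LSI-R totals for the same eight offenders at
# assessments 1 and 2, plus a recidivism outcome (1 = reoffended).
t1 = [34, 28, 41, 22, 26, 18, 37, 31]
t2 = [36, 24, 43, 20, 26, 15, 39, 27]
recid = [1, 0, 1, 0, 1, 0, 1, 0]

auc1, auc2 = auc(t1, recid), auc(t2, recid)
print(f"AUC at assessment 1: {auc1:.2f}")
print(f"AUC at assessment 2: {auc2:.2f}")
# Dynamic predictive validity would be claimed if auc2 > auc1.
```

The conundrum described below is that this comparison kept favoring assessment 2 over assessment 1, but showed no further improvement at later assessments.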

The Paradigm Conundrum

The results of my master’s thesis created a paradigm conundrum.  First, my results replicated all of the previous research. Prediction accuracy improved from the first to second assessment with the LSI-R. 

However, since prediction did not improve from the second to the third assessment, or from the third to the fourth, it appeared that some other mechanism was at work.  Why would prediction improve only between the first two assessments?

I had several possible explanations for my findings.

  1. The work in my master’s thesis was flawed.
  2. The dynamic predictive validity thesis was flawed. It was possible that the LSI-R could not measure change and something else was causing the improvement in prediction accuracy for the LSI-R from the first to the second assessment.
  3. Offender risk was not changing significantly.

Regarding number 1, there were all sorts of possible problems with my thesis.  The samples were getting smaller with each successive assessment, and perhaps something happened with the smaller samples.  However, even my smaller samples were bigger than those used in previous research.  Something was wrong, but it was not clear whether it was my data or my methods.  My methods seemed to be exactly like those used in the previous research.

Number 2 was also a possibility.  Andrews had pointed out that there was another reason the LSI-R could become more accurate between the first and second assessments.  He had indicated that rater accuracy could be improving between assessments because the rater was getting to know the offender better after the first assessment.  This was a distinct possibility that would explain my results.  If the rater had already spent 6 months working with the offender between assessments one and two, the rater might have hit the top of the learning curve and would not improve much on the third and fourth assessments.  How would one determine if rater improvement was causing the results?

Number 3 seemed unlikely, since there were changes in the LSI-R scores between assessments.  However, were the changes in score big enough to produce measurable changes in offending rates?  Were the changes in score insignificant?  How does one tell if a risk score change is big enough to be significant?
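Psychometrics has a standard answer to that last question: the Jacobson-Truax Reliable Change Index (RCI), which scales a score change by the measurement error of the instrument.  A change is considered reliable (beyond measurement noise) only if |RCI| exceeds 1.96.  The sketch below uses hypothetical values for the LSI-R’s standard deviation and test-retest reliability; it illustrates the technique, and is not the analysis from the thesis.  Note also that the RCI only answers the measurement-noise question, not whether a change is big enough to shift actual offending rates.

```python
import math

def reliable_change_index(x1, x2, sd, reliability):
    """Jacobson-Truax Reliable Change Index.
    sd: standard deviation of scores in a reference sample.
    reliability: test-retest reliability of the instrument."""
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2) * sem             # standard error of the difference
    return (x2 - x1) / s_diff

# Hypothetical values: an LSI-R total dropping from 34 to 29, with an
# assumed score SD of 8 and test-retest reliability of 0.80.
rci = reliable_change_index(34, 29, sd=8.0, reliability=0.80)
print(f"RCI = {rci:.2f}")   # |RCI| > 1.96 would indicate reliable change
```

Under these assumed values, a 5-point drop yields an RCI of about -0.99, which falls short of the 1.96 threshold: a change that looks meaningful at face value may not exceed measurement error.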

The Solution is Coming!

I will try to explain how I worked through each of these points, step by step, to resolve this paradigm conundrum.  The process involved rigorous analyses with hundreds of different tests.  I did not know how to do these analyses, and I seemed to be in uncharted territory, so I invented a process. 

The process I developed has direct implications for our understanding of human traits.

I promise to skip some of the boring parts and stick to the parts that you should care about.  More to come …

Posted by Thomas Arnold