
Dr. Timothy Shahan's Featured Publication Spotlight:

Gallistel, C. R., Craig, A. R., & Shahan, T. A. (2019). Contingency, contiguity, and causality in conditioning: Applying information theory and Weber’s Law to the assignment of credit problem. Psychological Review.


Spotlight by Dr. Timothy Shahan.


How does this publication fit into your line of research/inquiry? What makes this publication special?

At a general level, my research focuses on the processes involved in learning and adaptive behavior, especially voluntary actions guided by their consequences (known as instrumental or operant behavior in psychology). A fundamental problem in instrumental behavior is how an organism knows which actions produce which consequences (also known as the assignment of credit problem in the field of artificial intelligence).

What makes this publication special for me is that this fundamental problem is the very one that got me interested in the field when I was an undergraduate. My first scientific poster as an undergrad and my first two publications (over 20 years ago) were on this topic. Since then, it has always remained a bit of a hobby for me as a scientist. Although the bulk of my research has focused on other issues in instrumental behavior (e.g., its role in decision making, addiction, and relapse), I would occasionally do an experiment and publish a paper on this topic. But the truth is that I was not making much meaningful progress, because I was approaching the problem from a very traditional perspective.

Traditional approaches to instrumental learning (and artificial intelligence) assume that a contingency between an action and a consequence is what produces instrumental learning. A contingency is generally believed to do this because it arranges for two events (e.g., an action and a reward) to repeatedly occur close together in time (it arranges for temporal contiguity), thus strengthening a connection between them. In short, traditional accounts say that contingency works only through contiguity. Strangely, though, the problem of how to define or quantify contingency had never been adequately solved, because existing measures required invoking problematic counts of the number of times that two events did not occur together (the action did not occur and the consequence did not occur).

In this paper, we describe how to apply the quantitative framework of information theory to provide a formal measure of contingency that does not require the impossible task of counting the number of times that something did not happen. In addition, we suggest that contiguity is not what drives learning; rather, contiguity exerts its influence through its effect on contingency as measured in this way. We provide data from experiments showing that the new measure gives an excellent account of decreases in an instrumental action produced by degradations in contingency with action-independent or delayed rewards. We suggest that this measure (or something close to it) is likely what the brain is computing to solve the assignment of credit problem.
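To give a rough sense of the rate-based idea (this is an illustrative sketch only, not the paper's actual formalism; the window size, function name, and rate-comparison scheme here are my own simplifying assumptions), one can estimate the information an action carries about reward timing by comparing the reward rate shortly after actions with the overall reward rate. Notice that only the times at which events did occur enter the computation; no tally of non-occurrences is needed:

```python
import math

def contingency_bits(action_times, reward_times, session_length, window=5.0):
    """Illustrative rate-comparison sketch of an information-theoretic
    contingency measure (hypothetical; the published measure differs in
    detail). All inputs are event timestamps in seconds."""
    # Contextual (background) reward rate over the whole session.
    background_rate = len(reward_times) / session_length

    # Total time spent inside post-action windows (assumes actions are
    # spaced at least `window` apart, so windows do not overlap).
    post_action_time = len(action_times) * window

    # Count rewards that fell within a window after some action --
    # only occurrences are counted, never non-occurrences.
    followed = sum(
        1 for r in reward_times
        if any(0 < r - a <= window for a in action_times)
    )
    post_action_rate = followed / post_action_time

    if post_action_rate == 0:
        return float("-inf")  # action carries no information about reward
    # Bits of information the action provides about upcoming reward.
    return math.log2(post_action_rate / background_rate)

# Tight action-reward contiguity yields a high value; adding
# action-independent rewards pushes the ratio toward 1 (0 bits).
perfect = contingency_bits([10, 30, 50], [11, 31, 51], 100.0)
degraded = contingency_bits([10, 30, 50], [12, 40, 70], 100.0)
```

In this toy example, degrading the contingency (rewards delivered independently of the action) shrinks the rate ratio toward 1, driving the measure toward zero bits, which parallels the degradation effects described above.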

I anticipate that this publication will have a major impact on my future work. It suggests an entirely new way to conceptualize the fundamental nature of all the other questions about instrumental learning my research has addressed, and it raises a myriad of new questions about the fundamental processes governing adaptive behavior and learning.

How were your students involved in this publication? Who are the colleagues that you published with? How did you collaborate?

My former Ph.D. student Andrew Craig (now an assistant professor at SUNY Upstate Medical University) was a key player in this work. We started working on this project a number of years ago when he was still a student here. The work was a collaborative effort between us and Randy Gallistel at Rutgers, who is the first author of the paper. Randy is a renowned computational neuroscientist who has long been pushing multiple subfields (e.g., cognitive neuroscience, behavioral neuroscience, conditioning & learning) to reconceptualize how learning and memory processes work in the brain. Randy's work on applications of information theory to Pavlovian conditioning started influencing my research about a decade ago. Through a series of conversations resulting from some interactions at a conference, Randy and I decided to start collaborating on an information-theory-based account of contingency and the assignment of credit problem. The collaboration happened over several years and countless emails and video conferences. Andy Craig and I conducted the empirical work with animals here at USU, and Randy did the computational heavy lifting at Rutgers. We all worked on multiple versions of the paper over the course of a number of years.

What are the ripple effects from this publication? Or, how do you see this affecting the field?

If we have our way, this publication will convince both psychologists and those working on artificial intelligence that our quantitative treatment of contingency is superior to existing contiguity-based conceptions of how the brain solves the assignment of credit problem. The computations involved in our measure of contingency turn out to be fundamentally at odds with one of psychology's oldest laws: the Law of Effect. Thus, like much of Randy Gallistel's previous work, our work here is likely to be highly controversial to scientists working in multiple fields (a fact that became abundantly clear during the peer review process). Whether researchers in those various fields will come to accept it in time remains to be seen. Regardless, we believe that it is fundamentally on the right track.
