Can we predict religious extremism?

Justin Lane
4 min read · Oct 12, 2017


With all the extremism (terrorism, white supremacy, etc.) we see in the world today, how can we better deal with and respond to the threats that keep arising? One way to respond better to a threat is to be able to predict it. The possibility of predicting extremism is exactly what I covered in a new article published in the journal Religion, Brain & Behavior.

The first 50 people get free downloads with this link (nope not kidding).

In the article, I propose the idea that taking an information processing approach — that is to say, taking an approach that looks at how humans think about and manipulate information in their minds — can help us to do this.

But, you might say, religions are way too complex to be predicted.

Well, yes and no. I admit that religions are complex. However, that doesn't mean that they are not predictable. Take the weather as an example. The weather is extremely complex, yet we can predict it on short time scales. But because complex interactions unfold over time and are influenced by what happened today, the further out we try to predict, the less sure we are about our prediction (even with satellites).

Extremism, I argue, is a similar issue. We can use the mechanisms of human psychology to predict human actions just as we can use evaporation, condensation, and temperature to predict the weather.

This leads to two obvious questions: 1) what do we need to do this? 2) why aren’t we doing this now?

Well, to begin with, we need to abandon the idea that humans are blank slates, or that everything we do is learned through observation. Assuming humans are blank slates is simply false. Time and again, experiments have shown this not to be the case. Most recently, an experiment suggested that humans can recognise facial features even before they're born! These sorts of experiments demonstrate that strictly learning-based approaches (such as behaviorism) have too many holes for us to consider them a valid approach to extremism (or even to culture, religion, or psychology, for that matter).

Then, we need to start to build new forms of social AI (or multi-agent AI). Because these AI systems behave similarly to humans, we can use them to help predict radicalisation and extremism.
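To make the multi-agent idea concrete, here is a minimal sketch of the kind of simulation this describes. It is not the model from my paper; the agent structure, the `score` variable, the influence rule, and the 0.7 "radicalized" cutoff are all illustrative assumptions, chosen only to show how agent-level psychological mechanisms can generate population-level dynamics.

```python
import random

random.seed(0)  # reproducible illustration

class Agent:
    """Toy agent carrying a 'radicalization' score in [0, 1] (hypothetical variable)."""
    def __init__(self):
        self.score = random.random() * 0.3  # most agents start moderate

def step(agents, influence=0.05, threshold=0.7):
    """One round of social influence: each agent drifts slightly toward a
    randomly chosen peer's score. Returns how many agents exceed the
    (arbitrary, illustrative) radicalization threshold."""
    for a in agents:
        peer = random.choice(agents)
        a.score += influence * (peer.score - a.score)
    return sum(a.score > threshold for a in agents)

agents = [Agent() for _ in range(100)]
# Seed a small extremist cluster (illustrative assumption)
for a in agents[:5]:
    a.score = 0.9

for t in range(50):
    radical_count = step(agents)
```

Even a toy like this shows the point of the approach: by varying the micro-level parameters (influence strength, network structure, initial clusters) you can watch which conditions amplify or dampen extremism at the group level, which is exactly the kind of question a more serious MAAI model asks.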

This brings us now to question 2, which I didn't discuss in my paper. Why aren't we doing it? I think it's because of PR and politics.

As I noted a while ago in my article for PrimeMind magazine, politicians are slow to embrace new technologies, particularly those as complex as MAAI-based simulation and risk analysis. The public doesn't quite push for it, nobody really lobbies for it, so it stays in the corporate world for the time being (with some exceptions, such as my consultancy work on a project funded by the Norwegian government).

The other aspect is PR. Academia is not an industry of meritocracy; it's an industry of advertisements and PR, where an old guard of tenured professors (who can't be fired for anything short of criminal behaviour) work to push forward classical and established ideas, using whatever resources they can to advance their own agendas. This includes hiring PR consultants to disseminate their work to the public. In an article by Joe Brewer (who blocked me on Medium for critiquing his work — otherwise I'd link the piece for you), you see him pushing the idea that cultural evolution needs PR (for those who are unaware, cultural evolution is a re-invented application of Darwinian principles to the study of human groups). This is because they have to keep reframing the field as its main proponents begin to openly support ideas such as social Darwinism and argue that we should be applying them to public policy. On Aeon.com, a leader of the Cultural Evolution Society published an article titled "Social Darwinism is back: and this time it's a good thing"; the title got heat, and you can still see it in the URL at the link here, even though they've since edited the article to sound less endorsing of such a colonial idea. I'm glad to see this done; I only wish they'd been more sensitive to the history of what they propose before doing so. The fact is, these ideas are dangerous, and cultural evolution is far too immature and unvalidated to be a safe basis for policy. Its definitions are too loose to be usable for prediction in a hard sense of the word (prediction resulting from deduction). Nonetheless, having millions of dollars of research support from universities and religious organisations can allow for narrative control.

However, as with all of science, the bad ideas start to die off as their critical foundations are chipped away, and we progress toward newer and better scientific approaches.

Already, in the fields of modelling and simulation, engineering, and cognitive science, you see individuals adopting and working with the idea of Generative Emergence, an idea that has been foundational to my own work. As the father of generative emergence works on a number of government projects, I have hope that the winds of change are blowing.
