March 13, 2019

What Is Wrong With Our Strategic Communications?

A supporter of Russian opposition leader Alexei Navalny holds a poster depicting President Vladimir Putin during a rally in Vladivostok, Russia October 7, 2017. The writing on the poster reads, "Propaganda is the truth, corruption is good, Syria is peace, big brother is watching you".

No serious Western conference on security or international relations could take place these days without admiring the problem of disinformation.

There is, of course, a good reason for this, as this challenge has grown rapidly on a global scale and is still evolving thanks to our communicative incompetence (and—frankly—general laziness), but also because of greedy tech platforms, click-based journalism and technological advances in machine learning and big data science. Not to mention certain autocratic regimes that widely exploit disinformation tools for malicious purposes, domestically or internationally.

In truth, we are really good at describing the problems professionally, but not so good at providing solutions. And, to be honest, I am struggling to comprehend what is actually happening now in the universe of disinformation and strategic communication. Many conferences and other events in recent years have focused on discussing terminology and definitions, clarifying nuances between numerous aspects and repeatedly presenting the same stories about “the Gerasimov doctrine”, “hybrid war versus information warfare”, the “Lisa case” in Germany and the “crucified boy” in Ukraine, as if all of this were still the basis of passionate argument in 2019. Could anyone suggest why it is so important to go through these topics over and over again? Are we collectively writing some kind of textbook for future generations?

I do understand that there is still no solid research-based approach to the newly emerged challenge of computational propaganda or foreign state-backed hostile intrusion into our cognitive space. But how, exactly, will expensive training in media literacy (overwhelmingly disliked by journalists) or perpetually late, ineffective debunking of “fake news” help us to move forward and be smarter and more innovative? It should be clear by now that, with disinformation evolving so constantly and on so many fronts, what brought us here will not take us forward.

Unfortunately, we still spend too much valuable time (and money) convincing each other how dangerous propaganda can be and how important our common countermeasures should be. Yes, it can be harmful and hazardous if it provokes changes in the way we consume information and has a long-term effect on the societal behaviour of our citizens. Shouldn’t we recall that the phenomenon of disinformation has been with humankind for centuries and is probably as inexhaustible as many other social deviations? We already know the intentions of our potential adversaries and we have learned how to recognise and describe their tools, but we need more hard data and more research-based evidence to predict the potential impact of modern disinformation.

In the end, we take it for granted that a global challenge must be combated globally. But what about local action? Do we agree unquestioningly to create more bureaucratic bodies and authorities to deal with the problem, as if a separate powerful and well-resourced “EU commissioner on hybrid threats/resilience” would magically and effectively safeguard our information environment and cognitive space? Yes, that would be a clear signal that we Europeans were starting to take disinformation seriously. Evidently, the EU institutions have to use their leverage and play a more decisive role in strictly regulating the activities of arrogant tech giants, imposing adequate scrutiny and protecting citizens’ privacy without compromise, because many national governments are too small, too weak and poorly equipped for these legal battles, or simply unwilling to fight them. Beyond this important yet specific task, however, the playing field for the EU is pretty limited, because hostile disinformation does not create political circumstances but, rather, exploits weaknesses in society and internal problems, whether local, regional or national.

So, what could be wrong with our strategic communications? Is it poor knowledge about what precisely makes our communication strategic? Is it a tailoring issue, some collision between horizontal and vertical communication and a kind of refusal to recognise that, besides government bodies, the actual key responders (even more so) are actors in civil society and the expert community, who should be empowered and resourced? Is it over-mystification of opaque algorithmic disinformation? Or is it our temptation to simplistically build a counter-narrative mirroring that of the adversary, as if we would have no appealing story of our own were we not under attack? We are frustratingly slow in taking resolute steps because we tend to initiate multilateral and time-consuming discussions on every single minor action or reaction. Instead, we should focus on basic principles and frameworks and then leave the civic responders to deliver them.

To be fair, we have become more professional in monitoring external hybrid threats and smarter in understanding their essence. Our sporadic, learning-by-doing experience has improved the effectiveness of our countermeasures, though these are still delivered reactively and defensively, not as a deterrent. We have also learned that many of the adversary’s information activities are holistically integrated by design from the very start, not manually coordinated or adjusted as some might mistakenly think. Their primary goal is to create changes in behaviour, especially around election time, but many undesirable side-effects (areas of ambiguity, information fog, useful idiots, communicative disorientation and apathy, societal distrust, anxiety and fear, etc.) contribute greatly to the further polarisation of various audiences and therefore make it more difficult for our democracies to build consensus. Such tribalism is exploited effectively not just in virtual propaganda but also by interconnected groups of local actors, ideological sympathisers and inspired proxies who learn from the adversary’s playbook and then apply its tools domestically.

At a recent high-level conference, several panellists were asked how they would improve our strategic communications if they had one billion euros. Guess what? Many of them were confused and struggled to suggest any sensible action. We clearly lack fresh ideas on how to counter hostile disinformation and minimise its harmful effects on our societies. Towards the end of the discussion, one speaker proposed investing more resources in educating data scientists and analysts. Yes, that is something I fully support. Moreover, I anticipate an imminent need for full and legal access to big data held by corporate entities. In addition, we should devote more resources to training multidisciplinary specialists in cognitive resilience and introduce concrete measures to protect our own critical influencers in digital, societal, economic, cultural and religious matters. Many Western opinion-formers are high-value, low-risk informational, and sometimes even physical, targets for our adversaries.

What else? We should not forget to promote our own values and build positive images within the Western community, not just project them to the neighbourhood outside. Apparently simple things, like self-confident repetition of our own story and decisive steps to reinforce it, might become our strategic tool for overcoming hostile propaganda. When tackling disinformation, it is always wise to recall a phrase from Alice in Wonderland: “Threats, promises and good intentions don’t amount to action”.