A primer on Cyber Influence Operations

Today’s reflection will be something very different from the topics I’ve been blogging on. I feel that cyber influence operations are a key topic to understand in today’s day and age. Since knowledge has become a key pillar of society, much of modern warfare has also shifted to this plane, with the main weapons being misinformation and control over information streams and channels. Thus, it is more crucial than ever to reflect on how cyber influence operations work, and how they can impact the social space much more than we realise. I will be using this article by David Tayouri as a base to summarise key concepts on the topic.

As this post is primarily for my learning purposes, it will be written in a style similar to notes from a college lecture, where I try to take down important information as it is, rather than as an opinion piece. It will hence be a mixture of my own words and direct quotes from the article, not with an intention to plagiarise, but to capture the information for my own internalisation. My hope is that my summary will serve well to others who wish to get a quick gist of the topic.

What are Cyber Influence Operations?

Much of communication today occurs over the internet, social media and social applications (e.g. WhatsApp and Telegram). These platforms have become the key tools we use to stay updated and keep in touch with our loved ones. They have also become effective tools of influence. Sharing a post, tweeting an opinion, contributing to a discussion in a forum, or sharing a sentimental or political picture, are each one of many ways to convince others of our points of view, even to the fundamental level (e.g. ideological, philosophical).

Using cyber tools and methods to manipulate public opinion is called a cyber influence operation. They are focused efforts to understand and engage key audiences to create, strengthen, or preserve conditions favourable for advancing interests, policies and objectives, through the use of coordinated programmes, plans, themes, messages and products.

The challenge is identifying which influence operations are legitimate or malicious. It is acceptable to promote an idea or a product, and it is obviously unacceptable to incite groups to violence. But grey areas exist, and there is no systematic way of differentiating between legitimate and malicious influence operations.

Commonly known influence operations include fake news, disinformation, political astroturfing and information attacks. They may also be a part of a broader hybrid warfare approach, which combines them with cyber attacks, conventional military action, and covert kinetic attacks.

These operations may have different purposes: exerting psychological influence, hurting morale, shaping public awareness, instilling a sense that control has been lost and the normative way of life cannot be protected, and much more. As some of these operations may cause psychological damage, they are also known as disinformation attacks. Some of these are aimed at controlling the responses of target groups, which is known as perception management.

For example, during peacetime, influence operations may aim to promote desired ideas, or lead groups in preferred directions. A political party that openly campaigns in an attempt to swing votes may be considered legitimate. However, the same being performed by a foreign country would be deemed as intervening in a sovereign country’s domestic affairs. During times of conflict or war, influence operations can be used to turn public opinion against the government, thereby hurting morale through the cultivating of distrust in its ability to maintain control of the situation.

How Cyber Influence Operations are conducted

Tayouri outlines five broad steps of conducting cyber influence operations:

1. Define the goal: Is it to build – by promoting a subject, strengthening it, or improving public opinion of it? Or is it to harm – by attacking opponents, weakening adversaries, and creating negative public opinion?

2. Determine coverage and audience: Is the audience wide, or a specific group or a few groups? Is the target a key group of influencers? Radical or consensus groups? What is the demographic profile, including the age, gender, race, religion, etc?

3. Determine where it will be conducted: On which social networks and/or forums will the operation be mounted? What is the interaction between the platform and its intermediaries?

4. Determine the tools: What will be used (e.g. fake profiles, bots, trolls)? Each has its pros and cons. Fake profiles may have a better reputation, but require manual intervention. Bots are easily automated and programmed, but can be more easily identified. Trolls are used in specific instances, such as spreading aggressive, negative content.

5. Define the messages: What are the key messages? How will the messaging be tailored to its specific audiences?

The following are some techniques used by cyber influence operators, which are similar to those used in propaganda campaigns:

1. Stimulate strong emotions: Use fear, hope, anger, frustration and empathy to move audiences towards a desired goal, such as arousing them or suppressing their critical thinking.

2. Simplify information and tasks: Influence operations need not rely on just lies and untruths. They often also build on accurate and truthful information, but spin it into half-truths and opinions. They can also use simple stories that are familiar and trusted, and borrow metaphors, imagery and repetition to sell the message. Oversimplification is an effective technique because people seek to reduce complexity. By employing such methods, operators can hijack the audience’s capacity for critical thinking.

3. Respond to the audience’s needs and values: Convey messages, themes and language that appeal directly to a specific group. This may include racial or ethnic identity, hobbies, religious beliefs, values, favourite celebrities, or even personal aspirations and hopes for the future. It might even tap the deepest human values, such as the need to love and be loved, to feel a sense of belonging and a sense of place. This makes the messages personal and relevant, which helps to keep audiences engaged.

4. Attack opponents: Attack the credibility of one’s opponents, call their legitimacy into question, and question their accuracy and even their character. This can extend to excluding entire groups of people, inciting hatred, or cultivating indifference. It encourages an “us-them”, “either-or” divide to make messaging easier to convey.

Identifying Cyber Influence Operations

Tayouri proposes that identifying social influence is key to identifying cyber influence operations. Social influence can be characterised along three categories:

1. Influential actor: Or influencer. Cyber operators approach influencers to maximise their reach. Indicators for identifying influencers include active minds, trendsetting, social presence and impact, social activity, charisma, expertise, authority, number of followers and friends, etc. An actor’s influence is also measured by whether (and by how much) the message is shared beyond the actor’s network, whether the actor causes others to read the message, and the speed at which the message is read and shared.


2. Influential interaction: Can be measured by the number of times a message has been shared and/or quoted, the types of reactions caused by the messages, the number of readers/listeners reached, and if the message brings in a large group of unique visitors.

One commonly used platform in cyberspace is the weblog. Indicators for measuring the influence of weblogs include the network centrality score (i.e. measures the reputation of the site; is it central to a network, or just one with a limited number of contacts?), the hyperlink authority score (e.g. the number of links to the blog), the site traffic score (i.e. number of visitors), and the community activity score (e.g. the number of comments that the blog evokes).

3. Influential social networks: Indicators include the social distance between two actors, reciprocity, multiplexity, size of the network, density, connectivity, centrality, emotional value, group cohesion, and clustering.
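As a rough, invented illustration (none of the data below is from the article), two of these network indicators, density and degree centrality, can be computed directly from a small adjacency list:

```python
# Toy social network as an adjacency list (invented example data).
network = {
    "alice": {"bob", "carol", "dave"},
    "bob": {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave": {"alice"},
}

n = len(network)
# Undirected ties: each friendship appears twice in the adjacency list.
edges = sum(len(friends) for friends in network.values()) // 2
# Density: fraction of all possible ties that actually exist.
density = edges / (n * (n - 1) / 2)
# Degree centrality: each actor's direct connections, normalised.
centrality = {actor: len(friends) / (n - 1) for actor, friends in network.items()}

print(round(density, 2))    # 0.67 (4 of 6 possible ties)
print(centrality["alice"])  # 1.0: alice is connected to every other actor
```

The same counting idea extends to weblogs: tallying inbound links gives a crude version of the hyperlink authority score mentioned earlier.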

After determining social influence, the next step is usually differentiating between legitimate and malicious influence operations. This can be very context-dependent, especially since what is legitimate in one context may not be so in another. It is usually clearer if cyber tools (e.g. the use of bots), foreign intervention and fake news are involved. The next challenge then is to be able to identify that such means were used.

The following are additional parameters that may be used to identify a cyber influence operation:

1. Use of avatars, bots and trolls: Avatars are virtual identities in social media which hide their operators’ true identity. Bots are small software agents programmed to automatically respond to specific posts, or to publish automated posts promoting their programmed idea or product. Tactics to identify bots include spotting traits like a sleepless account, engaging in high-volume tweeting, replying to content with certain keywords, using stolen profile images, having implausible profile names, showing significant gaps in account activity, etc.

2. Publishing of posts and news from outside the country: Trying to convince others and promote one’s own ideas or beliefs is more legitimate when it is done in one’s own country, or from another country without hiding one’s identity. But if someone from another country impersonates a local citizen, it is suspicious and should definitely be investigated.

3. Publishing fake news: Researchers from Stanford found that 62% of American adults get their news from social media, that most of the most popular fake news stories were shared on Facebook, and that many people exposed to these fake news stories report believing them. There is good reason to believe that fake news stories are an effective tactic and hence a useful indicator for identifying cyber influence operations.

4. A sudden change of public opinion: Changes of public opinion in a short period may indicate foreign intervention, as changes in opinion tend to be gradual. For example, if a leading candidate loses his/her lead in a day or two, this might indicate external intervention.

5. Publishing radically negative phrases: Such language might indicate an incitement operation. One example of a red flag is if a political group’s credibility is questioned through the use of extremely negative expressions.
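To make the bot-spotting traits in point 1 concrete, here is a minimal sketch; the thresholds, the scoring rule, and the account records are my own illustrative assumptions, not values from the article:

```python
def looks_like_bot(post_hours, posts_per_day, profile_name):
    """Flag an account that trips at least two of three simple heuristics.

    post_hours: hours of the day (0-23) in which the account has posted.
    posts_per_day: average posting volume.
    profile_name: the account's display name.
    All thresholds below are illustrative assumptions.
    """
    flags = 0
    if len(set(post_hours)) >= 20:  # "sleepless": active in almost every hour
        flags += 1
    if posts_per_day > 72:          # high-volume posting: several posts an hour
        flags += 1
    if any(ch.isdigit() for ch in profile_name) and len(profile_name) > 12:
        flags += 1                  # long, generated-looking name with digits
    return flags >= 2

# An account posting round the clock at high volume, with a generated-looking name:
print(looks_like_bot(list(range(24)), 150, "patriot4982731"))  # True
# A typical human posting pattern:
print(looks_like_bot([8, 9, 12, 19, 21], 6, "jane_doe"))       # False
```

Real detection systems combine far more signals (image reuse, network structure, timing entropy); this only shows how a handful of the listed traits could be scored together.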

It is important to note that a cyber influence operation is not identified through just one indicator, but through a consideration of multiple factors and context.
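As one last sketch, the “sudden change of public opinion” indicator from point 4 could be operationalised as a simple day-over-day check on polling data. The poll numbers and the threshold below are invented for illustration:

```python
def abrupt_shifts(daily_support, threshold=5.0):
    """Return (day, change) pairs where support moved more than `threshold`
    points in a single day. Opinion normally drifts a point or two a day,
    so a large overnight swing is worth investigating."""
    return [
        (day, round(curr - prev, 1))
        for day, (prev, curr) in enumerate(zip(daily_support, daily_support[1:]), start=1)
        if abs(curr - prev) > threshold
    ]

# A candidate's daily polling average (invented numbers):
support = [48.0, 47.5, 48.2, 47.9, 39.5, 39.0]
print(abrupt_shifts(support))  # [(4, -8.4)]: the overnight 8-point drop stands out
```

On its own such a swing proves nothing — consistent with the point above, it is only one signal to weigh alongside the other indicators and the surrounding context.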
