User-Centric Security Research Actions

We discuss the relevant research actions that need to be taken to mitigate the threats, gaps, and challenges previously identified and reported in Appendix A.6 of document D4.3.

  • RA6.1 – Security training techniques. Many devastating cyberattacks have been possible because of the lack of basic security training of organizations’ and firms’ personnel. Led by a false perception of the value of internal assets and of the risks involved, users tend to neglect the safety of their work environment and expose the organization and themselves to simple yet effective attacks. Effective security training should consider not only the technical aspects of the field but also the psychological and human nature of the trainees. Research on these themes is expected to identify more effective ways of teaching users how to prevent security risks.
    Threats: T6.1.1 – Mishandling of physical assets, T6.1.2 – Misconfiguration of systems, T6.1.3 – Loss of CIA on data assets, T6.2.2 – Illegal acquisition of information, T6.5.1 – Skill shortage/undefined cybersecurity curricula, T6.5.3 – Pivoting
    Gaps: G6.1 – Gaps on modelling user behavior, G6.2 – Gaps on the relation between user behavior and adverse security-related effects, G6.3 – Gaps on security information, G6.4 – Gaps on security training and education, G6.6 – Gaps on protection from online scammers
  • RA6.2 – Fight against disinformation. The increasingly concerning spread of disinformation through online means has shown tangible effects on sensitive topics, including politics, health, and discrimination. Disinformation may be carried out by various actors with diverse intentions and motives, including terrorism and propaganda. Fleets of human-operated or automated accounts have been capable of shifting the outcome of national elections, instigating large violent events, and propagating forged information that discredits health organizations. Research directions in countering disinformation include adversarial techniques against the spread of conspiracy theories; monitoring and identification of the sources of disinformation and conspiracy trends; development and spread of a correct fact-checking culture through a network of national and international institutions; and detection and mitigation of forgery techniques, such as deep fakes (an illustrative detection sketch is given after this list).
    Threats: T6.2.1 – Profiling and discriminatory practices, T6.4.1 – Misinformation/disinformation campaigns, T6.4.2 – Smear campaigns/market manipulation, T6.4.3 – Social responsibility/ethics-related incidents, T6.5.1 – Skill shortage/undefined cybersecurity curricula
    Gaps: G6.3 – Gaps on security information, G6.4 – Gaps on security training and education, G6.6 – Gaps on protection from online scammers
  • RA6.3 – Social engineering and user behavior. Social engineering attacks are still the most effective against untrained users. The attacker may exploit gaps in technical preparation, social and hierarchical assumptions, and stressful situations to manipulate users and gain access to the organization’s assets or to higher-interest targets. Social engineering techniques include both network-based attacks, such as phishing and social network influence, and in-loco attacks. Research on social engineering and user behavior would make it possible to better profile the limitations of users facing an experienced attacker and to characterize methodologies that mitigate the associated risks.
    Threats: T6.1.3 – Loss of CIA on data assets, T6.1.4 – Legal, reputational, and financial cost, T6.2.2 – Illegal acquisition of information, T6.3.1 – Organized criminal groups’ activity, T6.3.2 – State-sponsored organizations’ activity, T6.3.3 – Malicious employees or partners’ activity, T6.5.1 – Skill shortage/undefined cybersecurity curricula, T6.5.3 – Pivoting
    Gaps: G6.1 – Gaps on modelling user behavior, G6.2 – Gaps on the relation between user behavior and adverse security-related effects, G6.4 – Gaps on security training and education, G6.6 – Gaps on protection from online scammers
  • RA6.4 – AI applications for user security. Machine learning and AI-based techniques have increasingly broad fields of application and are particularly effective in complex environments, such as user interactions, where a complete definition of the tackled problem is impossible. This research can be applied to user security, e.g., to characterize users’ behavior, detect anomalies, analyze network traffic, provide automatic decision making, and improve authentication techniques (see the anomaly-detection sketch after this list). More advanced techniques include user identification through biological features, automatic source code and software analysis, and security automation. Expanding this field of research is now even more necessary as attackers are adopting ML-based attacks.
    Threats: T6.1.2 – Misconfiguration of systems, T6.1.3 – Loss of CIA on data assets, T6.2.2 – Illegal acquisition of information, T6.3.1 – Organized criminal groups’ activity, T6.3.2 – State-sponsored organizations’ activity, T6.5.1 – Skill shortage/undefined cybersecurity curricula, T6.5.3 – Pivoting
    Gaps: G6.1 – Gaps on modelling user behavior, G6.2 – Gaps on the relation between user behavior and adverse security-related effects, G6.4 – Gaps on security training and education
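
To make the detection directions of RA6.2 more concrete, the following is a minimal, purely illustrative sketch (in Python, using scikit-learn) of how flagging potentially misleading content could be framed as supervised text classification. The example posts, labels, and feature choices are hypothetical placeholders; a real system would rely on curated, fact-checked corpora and on richer signals such as source metadata, account behavior, and propagation patterns.

  # Minimal sketch: flagging potentially misleading posts with a supervised
  # text classifier (TF-IDF features + logistic regression).
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # Hypothetical labelled examples: 1 = flagged by fact-checkers, 0 = not flagged.
  posts = [
      "Miracle cure suppressed by health authorities, share before it is deleted!",
      "The national health agency published updated vaccination guidance today.",
      "Secret documents prove the election results were fabricated.",
      "Official turnout figures were released by the electoral commission.",
  ]
  labels = [1, 0, 1, 0]

  model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
  model.fit(posts, labels)

  # Score a new post: a high probability is a triage signal for human
  # fact-checkers, not grounds for automated removal.
  new_post = ["Leaked report shows the cure they don't want you to see"]
  print(model.predict_proba(new_post)[0][1])

Such a score is best consumed by the fact-checking networks mentioned above as a prioritization aid rather than as an autonomous moderation decision.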
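
The anomaly-detection application mentioned under RA6.4 can be sketched in the same spirit. The per-session behavioral features below (login hour, failed login attempts, data volume) and the use of an off-the-shelf Isolation Forest are illustrative assumptions; a production system would use far richer telemetry and route alerts to analysts.

  # Minimal sketch: anomaly detection over per-session user-behavior features.
  import numpy as np
  from sklearn.ensemble import IsolationForest

  # Each row: [login_hour, failed_login_attempts, mb_downloaded] (hypothetical data).
  normal_sessions = np.array([
      [9, 0, 120], [10, 1, 80], [14, 0, 200], [11, 0, 150],
      [9, 0, 95],  [15, 1, 110], [13, 0, 175], [10, 0, 60],
  ])

  detector = IsolationForest(contamination=0.1, random_state=0)
  detector.fit(normal_sessions)

  # A session at 3 a.m. with repeated failures and a large download should
  # score as anomalous (predict() returns -1 for outliers).
  suspicious = np.array([[3, 7, 4500]])
  print(detector.predict(suspicious))            # e.g. [-1]
  print(detector.decision_function(suspicious))  # lower = more anomalous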

Highlights on Identified Research Actions

The importance of research in the field of users’ security is ever increasing in an age where information and privacy are the most valuable assets. Raising the base level of security training is an effective means of mitigating a large class of threats; improved training techniques are therefore expected to increase users’ security awareness and the efficacy of already adopted security measures. The spread of disinformation among less educated people in a time of stressful events has worsened the lack of trust in institutions, leading to violent events and non-compliance with health standards, and has left users more vulnerable to social engineering attacks such as persuasion and fraud. Finally, the expansion of machine learning and AI-based techniques in security has shown their effectiveness in many fields. Their application as a means to defend users from automated attacks is increasingly necessary as malicious actors evolve their methods and adopt AI-based techniques in their attacks.