In 2017, MITRE wrote about an internal research program that explored a new take on product evaluation methodologies using the MITRE ATT&CK® knowledge base. While the program’s original focus was the burgeoning endpoint detection market, it quickly expanded to new technology domains, including industrial control system (ICS) detection and mobile threat defense technologies. In 2018, MITRE announced the creation of the ATT&CK Evaluations program, which built on this internal research to develop an open and transparent test methodology that advances cybersecurity through threat-informed evaluations and publicly available results.
While ATT&CK Evaluations for Enterprise is conducting its third round of public evaluations and ATT&CK Evaluations for ICS is in its first round, we have also explored how to similarly impact mobile security solutions. Through our past experience developing the ATT&CK for Mobile knowledge base, evaluating the types of solutions that address mobile threats, and dialogue with the community, we concluded that while public evaluations in this domain remain a goal, there is initial legwork we need to do to bring the community to a collective understanding of the mobile threat landscape.
This post serves as an introduction to catalyze discussion around mobile threats. We share some of the challenges we experienced in our internal research program and pose additional questions to drive the conversation. In the coming weeks, we will follow with posts that explore the specific mobile threat scenarios we used in our pilot program. We will explain how we previously approached evaluating these scenarios and the challenges that arise while performing these types of evaluations. We will then extract key lessons learned to further the discussion on mobile security solutions and their testing methodologies.
We invite feedback from the community on all aspects of this research, whether that means better insight into the threats that need to be defended against, the types of products to evaluate, or how to evaluate technologies against these threats. Please reach out to the ATT&CK Evaluations team (firstname.lastname@example.org) with any comments or questions you might have to help shape our future work in this space.
Some example questions include:
What types of products/capabilities are in most need of evaluation?
What capabilities of products (e.g., detection, protection, response) are of most interest?
Are there specific adversaries, malware samples, or threat scenarios that are of specific interest?
The Challenges of Mobile Evaluations: Diversity in Solutions and Limited Intel
The mobile security market is notable for its diversity of products and approaches. In some cases, a single vendor may offer the full range of capabilities, while in others, products from multiple vendors integrate with one another. Each of these solutions offers a unique perspective and value, but as a result, each requires a different evaluation approach, which increases the complexity of the overall methodology.
Another challenge we faced while developing the test methodology is that mobile operating systems place strict limitations on third-party applications, including mobile threat defense (MTD) agents. Therefore, the capabilities and methods of operation of even a single market segment, such as MTD, are quite different from those of EDR products running on traditional enterprise PCs. Tests performed in enterprise PC environments may not be practical on mobile devices and must be adapted. For example, MTD products may not be capable of detecting some forms of malicious application behavior in real time on user devices; they may instead analyze applications in a separate instrumented environment. Testing procedures must account for these differences.
The biggest problem the industry faces is the lack of cyber threat intelligence on adversary attacks against mobile devices compared to what we see in the traditional enterprise PC environment. This scarcity makes it difficult to create realistic adversary emulation scenarios and to prioritize which ATT&CK techniques to address. To work around this limitation, we instead developed threat scenarios: hypothetical adversary activity based on the limited threat intelligence available to us as well as the capabilities advertised by MTD vendors.
These threat scenarios, used in our previous pilot effort, are intended as examples that illustrate the challenges of evaluating mobile security products. We are open to feedback on each of them, as well as input on additional threat scenarios that should be considered.
Alongside this call for input on improved test methodologies, our most important request is that you share whatever intelligence you have on threats in the mobile space with the ATT&CK for Mobile team.
In follow-on posts, we will dive into the details of each of these threat scenarios, including a description of the threat, how we tested products, challenges we faced, and areas where we could use community input. We look forward to increasing the dialogue around mobile threats and how to evaluate solutions against them. Also, keep an eye on the ATT&CK Blog for a series of posts exploring mobile security topics.
© 2021 MITRE Engenuity. Approved for Public Release. Document number AT0009