Over the past few years, Artificial Intelligence (AI) has completely changed the battleground for both cybercriminals and defenders. While nefarious actors have found increasingly inventive ways to put AI to use, new research shows that AI is also amplifying the capabilities of security teams, transforming them into ‘super defenders’ who are faster and more effective.
The latest research shows that, regardless of their expertise level, security analysts are around 44 percent more accurate and 26 percent faster when using Copilot for Security. This is good news for IT teams at firms across the continent, who are up against increasingly insidious threats.
Deepfakes alone increased tenfold over the past year, with the Sumsub Identity Fraud Report showing that some of the highest numbers of attacks were recorded in African countries such as South Africa and Nigeria.
Such attacks can have drastic financial implications for unsuspecting businesses. Recently, an employee at a multinational firm was tricked into paying $25 million (Sh3.25 billion) to a cybercriminal who used deepfake technology to pose as a coworker on a video conference call.
The Cyber Signals report warns that these kinds of attacks are only going to become more sophisticated as AI evolves social engineering tactics. This is of concern for businesses in Africa, which is still a global cybercrime hotspot.
Nigeria and South Africa estimate annual losses to cybercrime at around $500 million and R2.2 billion respectively, while Kenya recorded its highest-ever number of cyberattacks last year: 860 million.
A KnowBe4 survey of hundreds of employees across the continent revealed that 74 percent of participants were easily manipulated by a deepfake. Fortunately, AI can also be used to help companies disrupt fraud attempts. In fact, Microsoft records around 2.5 billion cloud-based, AI-driven detections every day.
AI-powered defence tactics can take multiple forms. Beyond the use of tools like Copilot to enhance security posture, Microsoft’s Cyber Signals report offers four additional recommendations for local firms to better defend themselves in a rapidly evolving cybersecurity landscape.
Adopt a Zero Trust approach
The key is to ensure the organisation’s data remains private and controlled from end to end. Conditional access policies can strengthen the organisation’s security posture with clear, self-deploying guidance, automatically protecting tenants based on risk signals, licensing and usage.
Enabling multifactor authentication for all users, especially for administrator functions, can also reduce the risk of account takeover by more than 99 percent.
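For illustration, the sketch below creates such a policy through the Microsoft Graph conditional access API, requiring MFA for all users across all applications. The access token is a hypothetical placeholder (in practice it would be acquired via MSAL with the Policy.ReadWrite.ConditionalAccess permission), and the policy body is a minimal example rather than a production-ready configuration.

```python
import requests

# Hypothetical placeholder; acquire a real token via MSAL with the
# Policy.ReadWrite.ConditionalAccess permission.
ACCESS_TOKEN = "<graph-access-token>"

# Conditional access policy requiring MFA for all users on all apps.
# Created in report-only mode so its impact can be reviewed before
# it is enforced.
policy = {
    "displayName": "Require MFA for all users",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["mfa"],
    },
}

response = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=policy,
)
response.raise_for_status()
print("Created policy:", response.json()["id"])
```

Starting in report-only mode lets administrators see which sign-ins the policy would affect before switching its state to enforced.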
Drive employee awareness
Aside from educating employees to recognise phishing emails and social engineering attacks, IT leaders can proactively share their organisations’ policies on the use and risks of AI. This includes specifying which designated AI tools are approved for enterprise use and providing points of contact for access and information.
Apply vendor AI controls
Through clear and open practices, IT leaders should assess areas where AI can come into contact with their organisation’s data, including through third-party partners and suppliers. Whenever an enterprise introduces AI, the security team should assess the relevant vendor’s built-in features to ascertain what access the AI has to the employees and teams using the technology.
Protect against prompt injections
Finally, it’s important to implement strict input validation for user-provided prompts sent to AI systems. Context-aware filtering and output encoding can help prevent prompt manipulation. Cyber risk leaders should also regularly update and fine-tune large language models to improve the models’ handling of malicious inputs and edge cases.
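As a minimal sketch of where the input-validation and output-encoding steps described above sit in the request path: the pattern list and length cap below are hypothetical placeholders, and a production system would layer such checks with model-based injection detection rather than rely on a deny-list alone.

```python
import html
import re

# Illustrative deny-list of phrases commonly seen in prompt-injection
# attempts; the patterns here are examples, not an exhaustive set.
SUSPICIOUS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all )?(previous|prior) instructions",
        r"disregard (the )?system prompt",
        r"reveal (your|the) (system prompt|instructions)",
    )
]

MAX_PROMPT_LENGTH = 2_000  # arbitrary cap chosen for this sketch


def validate_prompt(prompt: str) -> str:
    """Apply basic input validation before a prompt reaches the model."""
    if len(prompt) > MAX_PROMPT_LENGTH:
        raise ValueError("Prompt exceeds maximum allowed length")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt matches a known injection pattern")
    return prompt


def encode_output(model_response: str) -> str:
    """Encode model output before rendering it in a web page, so any
    injected markup is displayed as text rather than executed."""
    return html.escape(model_response)
```

A simple deny-list like this only catches crude attacks; its value is in showing the hooks where stricter, context-aware filtering can later be slotted in.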
As we look to secure the future, we must balance preparing for AI securely with leveraging its benefits, because AI has the power to elevate human potential and solve some of our most serious challenges.
The writer is Microsoft Country Manager for Kenya.