E808
Security Awareness

Security Topic: Artificial Intelligence Data Privacy

The Risks And Rewards Of Artificial Intelligence In Cybersecurity

For the past several years, cybercriminals have been using artificial intelligence to hack into corporate systems and disrupt business operations. But powerful new generative AI tools such as ChatGPT present business leaders with a new set of challenges.

Consider these entirely plausible scenarios:

  • A hacker uses ChatGPT to generate a personalized spear-phishing message based on your company’s marketing materials and phishing messages that have been successful in the past. It succeeds in fooling people who have been well trained in email awareness, because it doesn’t look like the messages they’ve been trained to detect.
  • An AI bot calls an accounts payable employee and speaks using a (deep-fake) voice that sounds like the boss’s. After exchanging some pleasantries, the “boss” asks the employee to transfer thousands of dollars to an account to “pay an invoice.” The employee knows they shouldn’t do this, but the boss is allowed to ask for exceptions, aren’t they?
  • Hackers use AI to realistically “poison” the data in a system, inflating the apparent value of a stock portfolio that they can cash out before the deception is discovered.
  • In a very convincing fake email exchange created using generative AI, a company’s top executives appear to be discussing how to cover up a financial shortfall. The “leaked” message spreads wildly with the help of an army of social media bots, leading to a plunge in the company’s stock price and permanent reputational damage.

These scenarios might sound all too familiar to those who have been paying attention to stories of deep-fakes wreaking havoc on social media or painful breaches in corporate IT systems. But the nature of the new threats is in a different, scarier category because the underlying technology has become “smarter.”

Until now, most attacks have used relatively unsophisticated high-volume approaches. Imagine a horde of zombies — millions of persistent but brainless threats that succeed only when one or two happen upon a weak spot in a defensive barrier. In contrast, the most sophisticated threats — the major thefts and frauds we sometimes hear about in the press — have been lower-volume attacks that typically require actual human involvement to succeed. They are more like cat burglars, systematically examining every element of a building and its alarm systems until they can devise a way to sneak past the safeguards. Or they’re like con artists, who can build a backstory and spin lies so convincingly that even smart people are persuaded to give them money.

Now imagine that the zombies become smarter. Powered by generative AI, each one becomes a cat burglar, able to understand the design of your security systems and devise a way around them. Or imagine a con artist using generative AI to engage interactively with one of your employees, build trust, and dupe them into falling for the con.

This new age of AI-powered malware means companies can no longer use best-practice approaches that may have been effective mere months ago. Defense in depth — the strategy of installing the right security policies, implementing the best technical tools for prevention and detection, and conducting awareness drives to ensure that staff members know the security rules — will no longer be enough. A new era has dawned.

Using combinations of text, voice, graphics, and video, generative AI will unleash unknown and unknowable innovations in hacking. Successful defenses against these threats cannot yet be automated. Your company will need to move from acquiring tools and establishing rule-based approaches to developing a strategy that adapts to next-level AI-generated threats in real time. This will require both smarter tech and smarter employees.

As with every new technology, artificial intelligence (AI) comes with risks. Before taking advantage of the opportunities and rewards AI offers, it’s important to fully investigate its vulnerabilities and manage the associated risks.

From ChatGPT to HackGPT: Meeting the Cybersecurity Threat of Generative AI
https://sloanreview.mit.edu/article/from-chatgpt-to-hackgpt-meeting-the-cybersecurity-threat-of-generative-ai/

The Risks And Rewards Of Artificial Intelligence In Cybersecurity
https://www.forbes.com/sites/forbestechcouncil/2023/05/22/the-risks-and-rewards-of-artificial-intelligence-in-cybersecurity/?sh=bc2915596db4

How Companies Can Use Generative AI And Maintain Data Privacy
https://www.forbes.com/sites/forbestechcouncil/2023/06/22/how-companies-can-use-generative-ai-and-maintain-data-privacy/?sh=4b4d9eda6cf4

Why artificial intelligence design must prioritize data privacy
https://www.weforum.org/agenda/2022/03/designing-artificial-intelligence-for-privacy/

AI and data privacy: protecting information in a new era
https://technologymagazine.com/articles/ai-and-data-privacy-protecting-information-in-a-new-era

How Generative AI Can Affect Your Business’ Data Privacy
https://www.forbes.com/sites/forbesbusinesscouncil/2023/05/01/how-generative-ai-can-affect-your-business-data-privacy/?sh=162013a7702d

