From Deepfakes to Quantum Computing: The Dangerous Misappropriation of AI Technology

The comprehensive report ‘Cybercrime Trends 2024’ by SoSafe (in German) highlights how rapidly artificial intelligence (AI) is spreading: from around 30 million users today, the number is expected to grow to 700 million by 2030.

Among the downsides and risks of AI are deepfakes and voice cloning, which are used to commit fraud, including in misinformation campaigns. AI tools can now go so far as to solve CAPTCHAs, previously part of a proven protection mechanism. It has also become easier than ever to program and train GPT-style chatbots with AI.

The report also identifies ‘harvest now, decrypt later’ (HNDL) among the dangerous trends: criminals collect encrypted data today, obtained through hacking or social engineering, and wait until quantum computing makes it possible to decrypt it.
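The threat rests on the fact that widely used public-key schemes such as RSA depend on the hardness of factoring large numbers, which a sufficiently powerful quantum computer running Shor's algorithm could undermine. A minimal sketch of the idea, using toy key sizes and ordinary trial division as a stand-in for the future quantum step (an illustration, not real cryptographic or quantum code):

```python
# Toy illustration of 'harvest now, decrypt later' with a tiny RSA key.
# Real keys use 2048+ bit moduli; the attacker's future advantage would be
# a quantum computer running Shor's algorithm, simulated here by trial division.

def factor(n):
    """Stand-in for a future quantum factoring step (Shor's algorithm)."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("no factor found")

# --- Today: data is 'harvested' in encrypted form ---
p, q = 61, 53                    # secret primes (toy sizes)
n, e = p * q, 17                 # public key: n = 3233, e = 17
message = 65
ciphertext = pow(message, e, n)  # attacker records this ciphertext now

# --- Later: factoring the modulus yields the private key ---
p2, q2 = factor(n)
phi = (p2 - 1) * (q2 - 1)
d = pow(e, -1, phi)              # private exponent recovered from the factors
recovered = pow(ciphertext, d, n)
assert recovered == message      # the harvested data is now readable
```

This is why the report treats HNDL as a present-day risk: data that must stay confidential for years is already worth stealing in encrypted form today.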

Finally, the US Cybersecurity and Infrastructure Security Agency has highlighted some underestimated risks of 5G mobile technology:

  • complex network vulnerabilities
  • supply chain attacks through hardware and software
  • inherited vulnerabilities from legacy infrastructure
  • dependence on insecure proprietary solutions, among others

The most visible trend for Internet users, however, is the manipulation of public opinion to exert negative influence on political and business decisions. Disinformation campaigns can be orchestrated cheaply and easily. Given all these factors, it is essential that politics, business and citizens implement defensive measures. The report offers a wealth of recommendations for companies and individuals to protect themselves.

Thorsten Koch, MA, PgDip
Policyinstitute.net
2 July 2024
