A Minefield of AI Risks for 2024


As businesses strive for innovation and efficiency, aligning technological advancements with robust compliance, governance, and security measures is critically important.

The rapid integration of artificial intelligence (AI) into business operations has become both a present-day necessity and a potential source of future vulnerability. Forrester, a renowned research and advisory firm, recently released its 2024 cybersecurity, risk, and privacy predictions, shedding light on the increasing reliance on AI-coding assistants, the code flaws they can introduce, and the resulting security challenges.

The Surge in AI-Coding Assistant Dependence

DevOps teams are increasingly turning to AI-coding assistants to automate coding tasks and boost productivity. Forrester highlights this growing dependence, emphasising that teams often prioritise efficiency over thorough security checks. The report raises a red flag, warning that inconsistent compliance practices, combined with the simultaneous use of multiple AI-coding assistants, will lead to flawed AI code being blamed for at least three publicly admitted breaches in 2024, and that these flaws carry API security risks in particular.
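To make this concrete, below is a minimal, purely hypothetical sketch (in Python with Flask; the endpoint, key, and data are invented for illustration and do not come from the Forrester report) of the kinds of flaws that can pass review when assistant-generated code is merged without a security check:

```python
# Hypothetical example of insecure assistant-style code -- illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

# Flaw 1: a credential hard-coded in source instead of being loaded from an
# environment variable or a secrets manager.
API_KEY = "sk-live-EXAMPLE"  # invented value for illustration

@app.route("/users/<int:user_id>", methods=["GET"])
def get_user(user_id):
    # Flaw 2: no authentication or authorisation check, so any caller can
    # enumerate user records -- broken object-level authorisation, the top
    # entry in the OWASP API Security Top 10.
    user = {"id": user_id, "email": f"user{user_id}@example.com"}
    return jsonify(user)

if __name__ == "__main__":
    # Flaw 3: debug mode left enabled, which exposes stack traces (and, via
    # the Werkzeug debugger, potential remote code execution) if deployed.
    app.run(debug=True)
```

None of these flaws is exotic; the point is that they are easy to miss when code is accepted at assistant speed without consistent review.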

The Democratisation of AI-Coding Assistants

The democratisation of generative AI is evident in the widespread adoption of AI-coding assistants, a trend expected to continue. Surveys of business and technology professionals indicate that nearly half of organisations are piloting, implementing, or have already implemented these assistants. Gartner's projection that 75% of enterprise software engineers will use AI coding assistants by 2028 underscores the growing significance of these tools. The surge in demand has, however, created a new challenge: with over 40 AI-coding assistants on the market, DevOps teams experiment with several at once to optimise performance for specific tasks, producing a form of shadow IT.

CISOs Facing a Balancing Act

For Chief Information Security Officers (CISOs), 2024 poses a significant challenge: balancing the productivity gains offered by generative AI against the imperative for greater compliance, governance, and security. The report emphasises the essential role of compliance in safeguarding intellectual property, urging organisations to prioritise governance and guardrails. How well CISOs balance innovation with compliance and governance will be a measurable factor in a company's competitive advantage in 2024.

The Imperative: Achieving AI’s Innovation Gains while Reducing Risk

Forrester’s predictions underscore the urgent need to align compliance, governance, and guardrails to ensure that the benefits of generative AI are realised with minimal risk. The report emphasises the critical role of governance and accountability in ensuring ethical AI usage, particularly as organisations transition from experimentation to full-scale implementation of new AI-based technologies.

To address the inadequacies of existing security awareness training programmes, Forrester recommends a data-driven approach to behaviour change.

Forrester’s Predictions for 2024

  • Social Engineering Attacks Soar: Forrester predicts a substantial increase in social engineering attacks, leveraging generative AI tools. The report suggests that these attacks will rise from 74% of all breach attempts to 90% in 2024, indicating a heightened vulnerability of the human element in cybersecurity. Traditional security awareness training methods are deemed ineffective, prompting the need for a more data-driven approach to behaviour change.
  • Tightening Cyber Insurance Standards: With the integration of real-time telemetry data and advanced analytics, insurance carriers are expected to tighten their standards. Forrester anticipates the red-flagging of two tech vendors as high risk based on risk scoring and calculations derived from security services, tech partnerships, and data-driven insights from insurance claims.
  • Fines for Mishandling PII: Forrester expects a ChatGPT-based app to be fined for mishandling personally identifiable information (PII). The report also underscores the vulnerability of identity and access management (IAM) systems to attack, with Active Directory a common target.
  • Regulatory Scrutiny on OpenAI: OpenAI is expected to face increased regulatory scrutiny, with ongoing investigations in Europe and the U.S.; Forrester notes a new lawsuit in Poland over potential GDPR violations. The report highlights the challenges faced by third-party apps built on ChatGPT, which lack OpenAI's technical and financial resources.
  • Growth in Senior-Level Zero-Trust Roles: Forrester predicts a doubling of senior-level zero-trust roles across the global public and private sectors. The forecast is supported by the broader adoption of the NIST Zero Trust Architecture framework, indicating an increased demand for cybersecurity professionals with specific expertise.

These cybersecurity predictions paint a comprehensive picture of the challenges and opportunities arising from the integration of AI into organisational processes. As businesses strive for innovation and efficiency, they serve as a timely reminder of the critical importance of aligning technological advancements with robust compliance, governance, and security measures. CISOs and cybersecurity professionals are urged to take a holistic approach, ensuring that the benefits of AI are harnessed without compromising data security or ethical usage. The predictions provide a roadmap for organisations to navigate the evolving landscape of AI-related risks.

