AI-Driven Software Development: Security Practices Struggle to Keep Pace With Rapid Changes


Report: AI Is Rewriting Software Faster Than It Can Be Secured

Artificial intelligence has become the driving force behind modern software development, sharply increasing development velocity. However, according to new research from Black Duck, security practices are failing to keep pace, creating a dangerous gap in enterprise cybersecurity as the software supply chain emerges as a critical attack surface.

Black Duck's latest report, "Navigating Software Supply Chain Risk in a Rapid-Release World," reveals that while 95% of organizations now rely on AI tools to generate code, only 24% apply comprehensive IP, license, security, and quality evaluations to that AI-generated code. This disconnect exposes organizations to significant risks that traditional AppSec programs weren't designed to handle.

The AI development revolution outpaces security measures

The adoption of AI in development workflows has become nearly universal. Engineering teams are leveraging various AI technologies at scale:

  • Nearly two-thirds of organizations report using proprietary AI/ML elements
  • 57% rely on AI coding assistants
  • 49% incorporate open source AI/ML models into their software

Despite widespread AI adoption, evaluation practices remain inconsistent. Only 76% check AI-generated code for security risks, 56% evaluate code quality, and 54% assess IP or licensing risk. Most concerning, only 24% perform all four essential checks—security, quality, IP, and licensing.

This gap leaves organizations vulnerable to hidden licensing violations, protected IP contamination, insecure code patterns, and embedded secrets that can spread across the supply chain. Adding to the concern, the report references external research indicating nearly half of AI-generated code snippets contain exploitable insecure patterns.
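One of those risks, embedded secrets, is straightforward to screen for automatically. The sketch below shows a minimal pre-merge gate for AI-generated code; the regex patterns and the `gate_ai_generated_code` function are illustrative assumptions, not part of the report, and real scanners such as gitleaks or trufflehog use far larger rule sets.

```python
import re

# Illustrative secret patterns only; production scanners maintain
# hundreds of rules and entropy-based checks.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_embedded_secrets(source: str) -> list[str]:
    """Return the lines of `source` that match a secret pattern."""
    return [
        line.strip()
        for line in source.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

def gate_ai_generated_code(source: str) -> bool:
    """Reject code containing likely embedded secrets. A full gate
    would also run security, quality, IP, and licensing checks."""
    return not find_embedded_secrets(source)
```

In a pipeline, this check would run alongside SAST and license scanning rather than replace them; it covers only one of the four evaluation areas the report identifies.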

"By 2030, 95% of code is expected to be AI-generated. Even now, in 2025, it is reported to be around 30% at large enterprises and close to 90-95% at small startups," said Saumitra Das, Vice President of Engineering at Qualys. "The key word to keep in mind is 'generated'. This is more code being generated than humans can reasonably even review for correctness, functionality, readability and security issues."

Ironically, confidence remains high despite these gaps: 95% of respondents express at least moderate confidence in their ability to secure AI-generated code, and 77% say they are very or extremely confident. The data suggests that confidence is misplaced.

Organizations must recognize that AI implementation brings significant risks and challenges to businesses beyond just operational benefits, including security vulnerabilities in the generated code.

Supply chain attacks already prevalent

The security gap created by accelerated AI development is not theoretical. Sixty-five percent of organizations experienced a software supply chain attack in the past year, with common attack types including:

  • Malicious dependencies (30%)
  • Unpatched vulnerabilities (28%)
  • Zero-day vulnerabilities (27%)
  • Malware injected into build pipelines (14%)

Nearly 40% of affected organizations experienced multiple types of supply chain attacks, highlighting how quickly vulnerabilities can compound.

The report emphasizes that AI fundamentally changes the scale and complexity of software risk. AI tools can introduce undocumented dependencies, licensing ambiguity, protected IP without attribution, and rapid code changes that outpace manual review processes.

Jason Soroko, Senior Fellow at Sectigo, warns: "Organizations should assume that AI-generated code expands their software supply chain risk, not just their development speed. This leaves large blind spots in provenance, obligations, and exploitable flaws. AI can also amplify dependency sprawl and introduce opaque third-party components that traditional AppSec programs were not built to inventory or govern at rapid-release cadence."

The critical importance of API security

As AI-generated code increasingly powers modern applications, comprehensive API performance testing and security validation become essential components of a robust defense strategy. APIs often serve as the primary integration points between AI systems and other software components, making them particularly vulnerable to exploitation if not properly secured.

SBOMs and monitoring: underutilized solutions

Transparency through Software Bills of Materials (SBOMs) emerges as one of the report's most consistent themes. Organizations that generate, validate, and operationalize SBOMs consistently outperform their peers:

  • 51% of organizations always validate supplier SBOMs
  • Those organizations are 15 percentage points more likely to report being prepared to evaluate third-party software
  • 59% of them remediate critical vulnerabilities within one day, compared to 45% overall

Yet, SBOM maturity remains uneven. Only 38% produce SBOMs for all software, and many generate them infrequently, limiting their usefulness in real-time risk response.
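Operationalizing SBOMs starts with basic validation. The sketch below checks a CycloneDX-style component list for missing version or license data; the field names follow the CycloneDX JSON layout, but the validation policy itself is an assumption for illustration.

```python
def validate_sbom(sbom: dict) -> list[str]:
    """Flag components in a CycloneDX-style SBOM that lack the
    fields needed for license and vulnerability tracking."""
    problems = []
    for comp in sbom.get("components", []):
        name = comp.get("name", "<unnamed>")
        if not comp.get("version"):
            problems.append(f"{name}: missing version")
        if not comp.get("licenses"):
            problems.append(f"{name}: missing license data")
    return problems

example = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "left-pad", "version": "1.3.0",
         "licenses": [{"license": {"id": "MIT"}}]},
        {"name": "mystery-lib"},  # no version, no license: flagged
    ],
}
```

Running such a check on every supplier SBOM at ingestion, rather than infrequently, is what turns an SBOM from a compliance artifact into a real-time risk signal.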

While 98% of organizations use automated AppSec tools, many struggle with effectiveness due to high false-positive rates (37%), poor coverage of transitive dependencies (33%), and difficulty prioritizing findings by exploitability or business impact (32%).
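One way to attack the prioritization problem is to rank findings by a combined score rather than raw severity alone. The weighting below is an illustrative assumption, not a standard formula: it boosts findings with a known exploit and those on business-critical assets.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float             # base severity, 0-10
    exploit_available: bool
    asset_criticality: int  # 1 (low) to 3 (business-critical)

def priority(f: Finding) -> float:
    """Weight raw severity by exploitability and business impact."""
    exploit_factor = 2.0 if f.exploit_available else 1.0
    return f.cvss * exploit_factor * f.asset_criticality

findings = [
    Finding("CVE-2025-0001", 9.8, False, 1),  # critical CVSS, no exploit, low-value asset
    Finding("CVE-2025-0002", 7.5, True, 3),   # lower CVSS, but exploited and business-critical
]
ranked = sorted(findings, key=priority, reverse=True)
```

Under this scheme the actively exploited finding on a critical asset outranks the higher-CVSS one, which is the behavior teams buried in false positives typically want.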

"Security teams can close the gap by treating AI output like third-party software and enforcing the same controls by default inside the developer workflow," Soroko advises. "Start with dependency management because organizations that track and manage open source dependencies well report far higher preparedness."

Technical debt considerations

Without proper code quality and security measures, AI-accelerated development can rapidly compound technical debt in software projects. Organizations must balance development speed with systematic approaches to managing this debt before it becomes unmanageable.

Implications for businesses and technology leaders

For CISOs and security leaders, the report suggests several critical actions:

  • Implement AI governance as a core security control, not an innovation afterthought
  • Treat AI-generated code like third-party software, subject to the same scrutiny
  • Make SBOMs, continuous monitoring, and automated remediation mandatory

Development and DevSecOps teams need to recognize that speed without visibility creates systemic risk. Security tooling must integrate directly into CI/CD pipelines to keep pace with AI-driven velocity, and dependency governance should be viewed as a business enabler, not just a compliance requirement.

For boards and executives, the stakes are high—42% of respondents say software supply chain risk is already a board-level issue, directly tied to revenue protection, customer trust, and regulatory exposure.

Das from Qualys suggests: "We need to use AI models that are diverse in their training datasets to review the generated code. We need automation via for example MCP that can take any code being compiled and send it to vendor A for security reviews, understand the findings, and use vendor B to automate the patching of the issues found."

Implementing a zero-trust approach to AI-generated code

Organizations should consider implementing a zero-trust security model for AI-generated code, treating all code as potentially compromised until proven otherwise through rigorous validation. According to NIST guidelines on software supply chain security, this approach significantly reduces risk exposure from untrusted sources.
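A zero-trust promotion policy can be expressed very simply: AI-generated code stays quarantined until every required evaluation passes. The check names below mirror the four areas the report identifies, but the `assess` function and its policy are a sketch, not a prescribed implementation.

```python
from enum import Enum

class Trust(Enum):
    UNTRUSTED = "untrusted"
    VALIDATED = "validated"

# The four evaluation areas from the report: all must pass.
REQUIRED_CHECKS = {"security", "quality", "ip", "licensing"}

def assess(checks_passed: dict[str, bool]) -> Trust:
    """Zero-trust stance: code is promoted out of quarantine only
    when every required check has run and passed."""
    if REQUIRED_CHECKS <= checks_passed.keys() and all(
        checks_passed[c] for c in REQUIRED_CHECKS
    ):
        return Trust.VALIDATED
    return Trust.UNTRUSTED
```

Note that a missing check counts the same as a failed one; under zero trust, absence of evidence is treated as absence of validation.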

How to use this information

Organizations can take several immediate steps to address these challenges:

  1. Implement comprehensive evaluation protocols for all AI-generated code that include security, quality, IP, and licensing checks.

  2. Develop and maintain SBOMs for all software components to improve visibility into the supply chain.

  3. Integrate automated security tools directly into development pipelines that can keep pace with AI-accelerated development.

As AI continues to transform software development, the organizations that integrate secure SDLC practices, SBOM validation, automated monitoring, and AI governance will define the next generation of resilient enterprises. The question for cybersecurity leaders is not whether AI is reshaping the software supply chain—it's whether their security programs can evolve quickly enough to keep up.
