Google’s AI-assisted fuzzing has identified 26 vulnerabilities in open-source projects, including a bug in OpenSSL, tracked as CVE-2024-9143, that went undetected for roughly two decades. The flaw, an out-of-bounds memory access, could crash a program and, in rare cases, allow execution of malicious code.
To streamline detection, Google’s developers used “fuzz testing,” a technique that feeds semi-random data into a program to expose weaknesses. Large language models (LLMs) made the process considerably more efficient: they simulated a developer’s workflow, from writing and compiling fuzz-test code to triaging the failures it uncovered. This approach let the AI test 272 software projects and surfaced the 26 vulnerabilities, including the long-overlooked OpenSSL issue.
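To make the technique concrete, here is a minimal sketch of the kind of libFuzzer-style fuzz target such a workflow produces. The parse_record function is a hypothetical stand-in for real project code, not anything Google actually generated:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical function under test: parses a tiny length-prefixed record.
 * A generated fuzz target would call a real project API here instead. */
static int parse_record(const uint8_t *buf, size_t len) {
    if (len < 2)
        return -1;                 /* need a length byte plus one body byte */
    uint8_t body_len = buf[0];
    if (body_len == 0 || body_len > len - 1)
        return -1;                 /* declared length must fit the input */
    uint8_t copy[64];
    if (body_len > sizeof(copy))
        return -1;                 /* refuse bodies larger than our buffer */
    memcpy(copy, buf + 1, body_len);
    return copy[0];
}

/* libFuzzer entry point: the engine calls this repeatedly with mutated
 * inputs and reports any crash or sanitizer violation it provokes. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}
```

Built with clang -fsanitize=fuzzer,address, the fuzzing engine mutates inputs on its own; the LLM’s role in Google’s pipeline is to write targets like this one for APIs that no human had yet wired up for fuzzing.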
Why the Bug Went Undetected
Experts suggest the bug went unnoticed for two decades because the code path it lives in is difficult to reach in testing. OpenSSL’s code was also widely regarded as thoroughly tested, which discouraged further scrutiny. As the researchers point out, standard tests cannot cover every execution path: unusual settings or configurations can trigger behaviors that ordinary inputs never expose, a pattern sketched below. Fortunately, this particular flaw poses a low security risk, since exploitation is considered unlikely.
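As a purely illustrative sketch (not the actual OpenSSL code, which involves invalid GF(2^m) curve parameters), consider a routine whose out-of-bounds write is reachable only when a rarely used flag meets an exact buffer size; a test suite that sticks to default settings never executes it:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: pads a buffer, with an off-by-one that triggers
 * only when the rarely enabled "legacy" flag is set AND the input
 * exactly fills the output buffer. Tests covering default settings
 * never reach the buggy write. */
static void copy_with_padding(uint8_t *out, size_t out_len,
                              const uint8_t *in, size_t in_len,
                              int legacy_mode) {
    if (in_len > out_len)
        return;                     /* normal bound check passes */
    memcpy(out, in, in_len);
    if (legacy_mode)
        out[in_len] = 0x00;         /* BUG: out of bounds when in_len == out_len */
}
```

A coverage-guided fuzzer that mutates both the input and the flag can stumble onto the legacy-mode, full-buffer combination quickly, which is precisely the kind of path hand-written tests tend to miss.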
Future of AI in Code Security
Traditionally, developers have written fuzz-test harnesses by hand, but Google aims to minimize human involvement, notes NIX Solutions. The company plans to extend the AI so that it not only identifies vulnerabilities but also proposes fixes automatically. “Our goal is to reach a level where manual verification is no longer necessary,” Google stated.
This innovative approach marks a significant step forward in securing open-source projects. We’ll keep you updated as more advancements in AI-driven code security emerge.