New Microsoft service uses artificial intelligence for software security testing

Microsoft Corp. has launched a new tool that uses artificial intelligence to track down bugs in software. Previously known as "Project Springfield," the tool introduced Friday uses a process called "fuzz testing." It's designed to find vulnerabilities by feeding large amounts of random data into a program to see whether the input triggers a crash or exposes security flaws. Previously, developers carried out this process manually; the new "Security Risk Detection" service automates it.

"The Microsoft Security Risk Detection service is unique in that it uses artificial intelligence to ask a series of 'what if' questions to try to root out what might trigger a crash and signal a security concern," Microsoft researcher David Molnar said in a blog post. "Each time it runs, it hones in on the areas that are most critical, looking for vulnerabilities that other tools that don't take an intelligent approach might miss."

Although it's open to anyone who develops software, the new service is being pitched as ideal for companies that build software themselves, modify off-the-shelf software or license open source offerings. Molnar added that the service, previously launched as a preview version last year, had already proved "especially…
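To illustrate the basic idea of fuzz testing described above, the minimal Python sketch below throws random strings at a deliberately fragile parser and reports any input that makes it crash. This is a generic, hand-rolled example, not the Microsoft Security Risk Detection service or its AI-guided approach; the `parse_record` and `fuzz` functions are hypothetical names used only for illustration.

```python
import random
import string


def parse_record(data: str) -> dict:
    """Hypothetical target: a toy parser expecting 'key=value' pairs
    separated by semicolons. It assumes every pair contains an '=',
    so malformed input raises an exception (the 'bug' a fuzzer finds)."""
    result = {}
    for pair in data.split(";"):
        key, value = pair.split("=")  # crashes when '=' is missing
        result[key.strip()] = value.strip()
    return result


def fuzz(target, iterations: int = 10_000) -> None:
    """Feed randomly generated strings to the target and report the
    first input that causes an unhandled exception."""
    alphabet = string.printable
    for i in range(iterations):
        length = random.randint(0, 50)
        candidate = "".join(random.choice(alphabet) for _ in range(length))
        try:
            target(candidate)
        except Exception as exc:
            print(f"iteration {i}: crash on input {candidate!r}: {exc!r}")
            return
    print("no crashes found")


if __name__ == "__main__":
    fuzz(parse_record)
```

A production fuzzer, and by the article's description Microsoft's service in particular, goes further by instrumenting the target and intelligently choosing inputs rather than relying on purely random data, but the crash-hunting loop above captures the core mechanic.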


