Global Finances Daily

How to deploy DAST to manage AI risks

May 5, 2023 | Protection


Generative artificial-intelligence models such as ChatGPT and GitHub Copilot can help write software code, but doing so presents new challenges to developers and application-security specialists.

Organizations must use AppSec testing methods like SCA, SAST and DAST to vet AI-generated code for errors, vulnerabilities and other hidden issues. It’s also essential to have a DevSecOps culture in place that can quickly spot and remediate AI-created problems and provide the skilled human supervision necessary to use AI coding safely.

The potential pitfalls of AI code generation

We’ve gone over the problems posed by AI-created software at length, but here’s a quick recap.

AIs create error-filled and insecure code. In a 2022 New York University study, 40 percent of programs written with GitHub Copilot contained at least one of MITRE’s top 25 most common vulnerabilities. Stack Overflow has banned ChatGPT-generated code for being too buggy.

Invicti researcher Kadir Arslan found that Copilot made rookie mistakes, including leaving web pages open to SQL injection and using easily cracked hashing algorithms.
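To make the class of mistakes concrete, here is a minimal Python sketch of the two errors described above: SQL built by string interpolation versus a parameterized query, and a fast unsalted hash versus a slow salted key-derivation function. This is an illustration of the vulnerability patterns, not Arslan's actual findings.

```python
import hashlib
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated straight into the SQL string,
    # so an input like "' OR '1'='1" rewrites the query's logic.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchone()

def find_user_safe(conn, username):
    # Safe: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()

def hash_password_weak(password: str) -> str:
    # Weak: MD5 is fast and unsalted, so hashes are cheap to crack.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_better(password: str, salt: bytes) -> str:
    # Better: a slow, salted KDF such as PBKDF2 raises the cost of cracking.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000).hex()
```

A SAST tool would flag both the interpolated query and the MD5 call statically; the point is that an AI assistant can emit the unsafe variants with complete confidence.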

“You have to be very careful and treat Copilot suggestions only as a starting point,” Arslan wrote in an October 2022 Invicti blog post.

AIs can be tricked into revealing secrets or performing unethical tasks. I’ve used specially worded instructions, or “prompts,” to get around ChatGPT’s internal restrictions and make the AI create a phishing email and write basic malware. More ambitious “prompt injections” fool the AI into revealing other users’ queries, or embed code within prompts so that the code can be executed.

Prompt injection isn’t always necessary. Microsoft’s Bing AI chatbot glibly told reporters that its secret code name was “Sydney.” Samsung staffers fed proprietary data into ChatGPT when seeking fresh solutions to technical problems, not realizing that everything the AI ingests becomes part of its training set. (ChatGPT parent company OpenAI now lets you switch off your query history to prevent this.)

AIs may recreate proprietary code or malware. A lot of undesirable data becomes part of the AI training set. That’s fine in the long run, because the AI needs to learn to tell good from bad.

Yet we’ve already seen examples of GitHub Copilot reproducing GPL code, and copyrighted code is just as susceptible to being “recreated” by an AI. Copyrighted code in your software might expose you to litigation and licensing fees; GPL code might force your whole project to become open-source.

There’s also the risk that AI-generated code may contain malware reproduced from its training set, either ingested accidentally or deliberately fed into the system by malicious actors.

“As the adoption [of AI coding] creeps up,” said Invicti Chief Technology Officer and Head of Security Research Frank Catucci in a recent SC Magazine webinar, “the focus or the bullseye, the target, if you will, will be created on perhaps poisoning the well or poisoning the code that comes from these training datasets.”

AIs “hallucinate” facts and can be exploited accordingly. Large-language-model AIs will make up facts and sources to make their replies sound more authoritative, a phenomenon known as “AI hallucination.”

This sounds amusing, but AI hallucinations can have real-world consequences. Invicti researchers found that when given a coding task, ChatGPT reached out to online open-source code libraries that didn’t exist.

To see if other instances of ChatGPT might also be calling out to these non-existent libraries, the Invicti researchers placed garbage code in a directory using the same name and online location as one of the fake libraries — and got several hits over the next few days.

That indicates that ChatGPT coding hallucinations can be reproduced. It also creates an opportunity for malefactors to poison legitimate projects by “squatting” on code libraries that AIs believe should exist.
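One simple defense against hallucinated or squatted dependencies is to refuse any AI-suggested package that a human hasn't already vetted. The sketch below shows the idea with an allowlist; the package names (including the fake "fastjsonparse2") are made up for the example, and a real setup would wire this into the dependency-install step.

```python
# Illustrative defense against hallucinated or squatted dependencies:
# install nothing an AI assistant suggests unless it appears on a
# human-reviewed allowlist.

APPROVED_PACKAGES = {"requests", "numpy", "flask"}  # vetted by the team

def vet_dependencies(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split AI-suggested package names into approved and rejected lists."""
    approved = [p for p in suggested if p.lower() in APPROVED_PACKAGES]
    rejected = [p for p in suggested if p.lower() not in APPROVED_PACKAGES]
    return approved, rejected

# e.g. an assistant suggests one real package and one plausible-sounding fake
ok, flagged = vet_dependencies(["requests", "fastjsonparse2"])
```

The allowlist is deliberately strict: a hallucinated library name fails closed instead of resolving to whatever a malicious actor has registered at that address.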

“We had a library recommended that did not exist,” said Catucci. “We were able to create one and find hits and traffic being directed to it, obviously with benign code, but it could have very well been malicious.”

How SAST, DAST and SCA help mitigate AI coding threats

As with all coding bugs and vulnerabilities, the best way to catch errors made by AI-assisted code generation is to use automated tools that apply methods such as software composition analysis (SCA), static application security testing (SAST) and dynamic application security testing (DAST) during the software development life cycle (SDLC).

SCA should flag bits of code that might be someone else’s intellectual property. SAST will examine the written code itself for vulnerabilities and other mistakes, although you will need a separate SAST tool for every coding language you use.

As soon as elements of the project can be executed as software, DAST will monitor inputs and outputs for signs of security flaws. A hybrid approach, known as interactive application security testing (IAST), combines elements of SAST and DAST to examine code while it’s running, probing the application from both inside and outside.
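The essence of a DAST check can be sketched in a few lines: send a marker payload to a running endpoint and flag any response that reflects it back unescaped, a classic sign of cross-site scripting. The "apps" below are stand-in functions for illustration; a real scanner sends HTTP requests to a live instance.

```python
import html

# Minimal sketch of a DAST-style probe: inject a marker payload and flag
# responses that echo it back unescaped.

PAYLOAD = '<script>alert("dast-probe")</script>'

def vulnerable_app(query: str) -> str:
    # Echoes user input into HTML without escaping.
    return f"<p>Results for {query}</p>"

def hardened_app(query: str) -> str:
    # Escapes user input before rendering.
    return f"<p>Results for {html.escape(query)}</p>"

def probe(app) -> bool:
    """Return True if the app reflects the payload unescaped."""
    return PAYLOAD in app(PAYLOAD)
```

Note that the probe needs no access to source code at all, which is exactly why DAST catches runtime flaws that static analysis of AI-generated code can miss.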

[Image: Organizations have a variety of automated scanning tools they can use to implement DevSecOps, each providing a different array of functions. (Invicti.com)]

“There are dangerous gaps in security coverage without DAST in place,” said Patrick Vandenberg, director of product marketing at Invicti, in an April blog post. “You must have SAST, SCA, and DAST working together to improve coverage and find more vulnerabilities.”

Modern DAST tools do more than just watch the “black box” of a running application. They can automatically analyze potential flaws and even test them to weed out false positives, a process that Invicti calls “proof-based scanning.”

They also expand the overview of the potential attack surface, looking into web and cloud assets to maximize visibility, and can integrate with continuous integration/continuous delivery (CI/CD) tools and include compliance modules. Because modern DAST tools integrate aspects of SAST, they can be used to catch errors earlier in the SDLC than legacy DAST tools.

“There might be something that’s developed in OpenAI that you’re not going to find with SAST or SCA,” Catucci said during the SC Magazine webinar. “There might be a vulnerability in there that’s only present when you’re essentially having this application spin up … You would never know that with more static tests, whereas you would know that with the dynamic test.”

Creating a DevSecOps culture

More essential than using the right tools, however, is creating the right culture. Just as organizations merged the missions of software developers and IT operations staffers to create DevOps, security practitioners need to be added to the mix to create DevSecOps.

Developers may be reluctant to work with security staffers who want to pick over code and (purportedly) slow down projects. That’s why it helps to designate one member of each development team as a “security champion” who can liaise between the two groups — and ease developers into adopting security best practices.

“A security champion isn’t someone who wins hacking contests (though that’s certainly a plus) but one who champions the security message,” wrote Invicti’s Meaghan McBee in an August 2022 blog post. “They work daily to relay essential updates, surface and resolve common pain points, lean in on threat and vulnerability management, and provide more clarity on security needs to everyone from leadership down.”

Automated tools that provide continuous testing and scanning will take the load off both developers and security personnel, minimizing friction between the teams and letting them focus on their jobs.

“When security tests are automated and run on every check-in, developers can find and fix issues much more efficiently,” said Invicti Distinguished Architect Dan Murphy in a February 2022 blog post. “The goal is to treat the introduction of a critical security vulnerability just like a code change that causes unit tests to fail — something that is fixed quickly, without requiring the overhead of meetings and internal triage.”
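Murphy's idea of treating a critical vulnerability like a failing unit test can be sketched as a check that runs on every check-in and fails the build when known risky patterns appear in a diff. The patterns below are illustrative; a real pipeline would run full SAST/SCA/DAST tools rather than regexes.

```python
import re

# Sketch of "security failure == failing test": fail the build when a
# commit introduces a known risky pattern. Patterns are illustrative only.

RISKY_PATTERNS = {
    r"hashlib\.md5": "weak hash algorithm",
    r"\beval\(": "arbitrary code execution risk",
    r"verify\s*=\s*False": "TLS verification disabled",
}

def scan_diff(diff_text: str) -> list[str]:
    """Return a list of findings; an empty list means the check passes."""
    findings = []
    for pattern, reason in RISKY_PATTERNS.items():
        if re.search(pattern, diff_text):
            findings.append(reason)
    return findings
```

Wired into CI, a non-empty findings list blocks the merge the same way a failing assertion would, so the fix happens in the normal development loop rather than in a later triage meeting.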

Last, and perhaps most important: don’t automate too much, especially if you’re using AIs to help your developers code. Let humans supervise both the coding process and the testing process. You want to know what the AI is up to, not least because the AI itself may not know.

“AI-based code generators are likely to become a permanent part of the software world,” wrote Catucci recently. “In terms of application security, though, this is yet another source of potentially vulnerable code that needs to pass rigorous security testing before being allowed into production.”



Editorial Team

© 2025 All Rights Reserved - Global Finances Daily.