The Fifth Elephant OSAI meet-up - Hyderabad edition

Namburi Manikanta

DevSecOps: Using Open-Source Security Tools with Local & Managed LLMs in the SDLC

Submitted Sep 12, 2025

Session Description:

Secure SDLC is no longer just “scan and forget.” In this talk, I’ll show how locally hosted open-source LLMs and managed LLM services can work inside your CI/CD pipeline to make security findings actionable: from static and dynamic analysis to human-readable remediation steps, code diffs, and auto-fix PRs. We’ll stitch together popular OSS scanners (for code, dependencies, containers, and secrets) and feed their findings into LLMs, whether self-hosted with llama.cpp or managed services like Claude and ChatGPT, to prioritize and propose precise fixes, without locking into a single approach.

The talk includes an end-to-end demo that runs entirely on GitHub Actions: the pipeline scans application code and Docker images, generates SBOMs, discovers vulnerabilities, and leverages both local and cloud LLMs to produce patches or hardening guidance. We’ll also cover adding a lightweight DAST step against a preview build. The result is a developer-first, privacy-preserving, and flexible DevSecOps loop that shortens “detect ➜ understand ➜ fix” from days to minutes.
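
To make the remediation step concrete, here is a minimal Python sketch of the core call: sending one normalized finding plus its code snippet to a locally running llama.cpp server (llama-server exposes an OpenAI-compatible /v1/chat/completions route) and asking for an explanation and a diff. The port, model name, prompt wording, and the remediate_finding helper are illustrative assumptions, not the exact demo code; the managed path would call a provider API such as Anthropic’s Messages endpoint instead, using that provider’s request format.

    import json
    import requests  # pip install requests

    # Assumed local endpoint: llama.cpp's llama-server exposes an
    # OpenAI-compatible chat completions route when run as a server.
    LOCAL_LLM_URL = "http://127.0.0.1:8080/v1/chat/completions"

    PROMPT_TEMPLATE = """You are a security remediation assistant.
    Explain the risk of this finding in one short paragraph, then propose
    a minimal unified diff that fixes it.

    Finding (JSON):
    {finding}

    Code snippet:
    {snippet}
    """

    def remediate_finding(finding: dict, snippet: str) -> str:
        """Send one finding + snippet to the local LLM; return its suggestion."""
        prompt = PROMPT_TEMPLATE.format(finding=json.dumps(finding, indent=2),
                                        snippet=snippet)
        resp = requests.post(LOCAL_LLM_URL, json={
            "model": "qwen2.5-coder-7b",   # whichever GGUF the server has loaded
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,            # keep suggested fixes conservative
        }, timeout=120)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

In the pipeline this runs once per prioritized finding, and the returned text feeds the PR feedback step described under Architecture below.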

Takeaways:

  • A reproducible blueprint for plugging open-source scanners into your CI and enhancing the results with either local or managed LLMs, producing prioritized findings and auto-generated fixes/PR comments.
  • Practical insights on hybrid pipelines: balancing privacy, performance, and cost by mixing self-hosted and managed LLMs.

Target Audience:

  • DevOps / Platform and Security Engineers who own CI/CD pipelines and want pragmatic AppSec.
  • Homelabbers and OSS practitioners interested in self-hosting CI runners and LLMs.

Architecture:

  1. Trigger: Pull Request opens → GitHub Actions job kicks in.
  2. Scan Stage:
    • SAST (Semgrep, Bandit, Hadolint)
    • Secrets (Gitleaks)
    • SBOM + Vulns (Syft, Trivy, Grype, Dockle)
  3. Aggregate Stage: Normalize outputs into a JSON schema (findings.json); see the sketch after this list.
  4. LLM Remediation Stage:
    • Local LLMs with code snippets: llama.cpp running Qwen/DeepSeek ~7B-8B models.
    • Managed LLMs with SBOM results: Claude or ChatGPT via APIs.
    • Prompt = structured JSON + code snippet → response = prioritized fixes + diffs.
  5. Feedback Stage:
    • Post PR comments with suggested diffs/explanations.
    • (Optional) auto-hardening PR with safe autofixes.
  6. Evaluation/Policy: Fail build only for high/critical findings; others are warnings (see the feedback and gating sketch after this list).
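
For step 3, a minimal sketch of the Aggregate Stage, assuming Semgrep (--json) and Trivy (JSON) reports are already on disk. The shared schema fields (tool, rule, severity, file, line, message) are an illustrative choice, and the scanners’ field names may vary slightly between versions:

    import json

    def load(path):
        with open(path) as f:
            return json.load(f)

    def normalize_semgrep(report):
        # Semgrep's --json output keeps matches under "results".
        for r in report.get("results", []):
            yield {
                "tool": "semgrep",
                "rule": r["check_id"],
                "severity": r["extra"]["severity"],
                "file": r["path"],
                "line": r["start"]["line"],
                "message": r["extra"]["message"],
            }

    def normalize_trivy(report):
        # Trivy's JSON report nests vulnerabilities under each scanned target.
        for result in report.get("Results", []):
            for v in result.get("Vulnerabilities") or []:
                yield {
                    "tool": "trivy",
                    "rule": v["VulnerabilityID"],
                    "severity": v["Severity"],
                    "file": result.get("Target", ""),
                    "line": None,
                    "message": f'{v.get("PkgName")} {v.get("InstalledVersion")} '
                               f'(fixed in {v.get("FixedVersion", "n/a")})',
                }

    findings = list(normalize_semgrep(load("semgrep.json"))) + \
               list(normalize_trivy(load("trivy.json")))
    with open("findings.json", "w") as out:
        json.dump(findings, out, indent=2)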
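
And for steps 5-6, a sketch of posting the LLM output back to the pull request and gating the build. It assumes GITHUB_TOKEN and GITHUB_REPOSITORY from the Actions environment, a PR_NUMBER variable exported by an earlier step, and a remediations.md file written by the LLM stage; those names are illustrative, while the GitHub REST comment endpoint is real:

    import json
    import os
    import sys
    import requests  # pip install requests

    # GITHUB_TOKEN and GITHUB_REPOSITORY come from the Actions environment;
    # PR_NUMBER is assumed to be exported by an earlier workflow step.
    token = os.environ["GITHUB_TOKEN"]
    repo = os.environ["GITHUB_REPOSITORY"]            # e.g. "owner/name"
    pr_number = os.environ["PR_NUMBER"]

    with open("findings.json") as f:
        findings = json.load(f)
    with open("remediations.md") as f:                # written by the LLM stage
        body = f.read()

    # Feedback stage: post the LLM-generated explanations/diffs as a PR comment
    # (pull request comments use the issues comment endpoint).
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        json={"body": body},
        timeout=30,
    )
    resp.raise_for_status()

    # Policy stage: only high/critical findings break the build. Semgrep uses
    # ERROR/WARNING/INFO, so ERROR is treated as blocking here as well.
    blocking = [fnd for fnd in findings
                if str(fnd.get("severity", "")).upper() in {"HIGH", "CRITICAL", "ERROR"}]
    if blocking:
        print(f"{len(blocking)} blocking finding(s); failing the job.")
        sys.exit(1)
    print("No blocking findings; warnings only.")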

Setup Details (used in demo):

  1. Local LLMs (a backend-selection sketch follows this list):
    • GPU server with an RX 6600 (8 GB VRAM) running DeepSeek-Coder-7B quantized to Q8_0 and Qwen2.5-Coder-7B quantized to Q2_K (~20 tokens/s)
    • CPU-only server with 32 cores and 126 GB memory (for offloading tasks; slower inference)
  2. GitHub Actions self-hosted runners (optional)
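
Because the demo mixes a self-hosted llama.cpp server with managed APIs, a small backend-selection check keeps the pipeline usable when the homelab GPU box is offline. This is a sketch under the assumption that the llama.cpp server’s /health route is reachable from the runner and that a managed key (ANTHROPIC_API_KEY here, as an example) signals the fallback:

    import os
    import requests  # pip install requests

    # llama.cpp's llama-server exposes a /health route when running; the port
    # and the ANTHROPIC_API_KEY fallback are illustrative assumptions.
    LOCAL_HEALTH_URL = "http://127.0.0.1:8080/health"

    def pick_llm_backend() -> str:
        """Prefer the self-hosted llama.cpp server; fall back to a managed API
        when the local box is unreachable."""
        try:
            if requests.get(LOCAL_HEALTH_URL, timeout=2).ok:
                return "local"
        except requests.RequestException:
            pass
        if os.environ.get("ANTHROPIC_API_KEY"):
            return "managed"
        raise RuntimeError("No LLM backend available for the remediation stage")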

About Me:

I am Rushikesh Darunte, a DevOps Engineer at Oraczen. I focus on building CI/CD pipelines and GitOps workflows and on maintaining cloud/on-prem infrastructure. Outside of work, I maintain a homelab with multiple servers, where I self-host a number of services and am currently experimenting with running LLMs locally.

LinkedIn: https://www.linkedin.com/in/rushikesh-darunte-758565226/
GitHub: https://github.com/x64nik
Website: https://rushidarunte.com/
