~45 min Workflow Agent Block 2 · 0:45–1:30

Firewall Rule Optimizer

Your organization's network security team exports firewall rule reports periodically. These reports contain hundreds of rules accumulated over years — many are outdated, overly permissive, or redundant. Manual review is time-consuming and error-prone.

You'll build a Kindo Workflow Agent that:

  1. Ingests a firewall rule report
  2. Analyzes each rule against security best practices
  3. Categorizes rules into four cleanup categories
  4. Outputs a structured report with confidence levels and metadata

Output Categories

| Category | Description | Risk Level |
| --- | --- | --- |
| Risky rules | Rules that expose the network to known threats (e.g., allow inbound from any source to sensitive ports) | 🔴 High |
| Over-permissive rules | Rules broader than necessary (e.g., allow all protocols when only TCP/443 is needed) | 🟡 Medium |
| Unused rules | Rules with zero or near-zero hit counts over the analysis period | 🟢 Low |
| Redundant rules | Rules that duplicate or are fully covered by other rules | 🟢 Low |
Prerequisites
  • Kindo account with access to the training instance
  • Sample firewall rule report (provided by facilitator)
Step-by-Step Guide
Step 1: Create the Agent
⏱ 5 min
  1. Sign in to your Kindo instance
  2. Navigate to the Agents tab
  3. Click Create an Agent
  4. Select Workflow Agent
  5. Configure the agent:
    • Name: Firewall Rule Optimizer
    • Description: Analyzes firewall rule reports and categorizes rules for cleanup — identifies risky, over-permissive, unused, and redundant rules with confidence levels.
Step 2: Set Up the Knowledge Store
⏱ 5 min
  1. In the Agent Configuration panel, click Add Knowledge Store
  2. Upload the sample firewall rule report (CSV or text format)
  3. Optionally, upload a reference document with your organization's firewall policy standards
💡 The Knowledge Store gives your agent context. The more specific and structured the reference material, the better the agent's analysis will be.
Step 3: Write the Analysis Prompt (LLM Step)
⏱ 10 min
  1. Click + to add a step → Select LLM Step
  2. Name the step: Analyze and Categorize Rules
  3. Enter the following prompt:
You are a senior network security engineer performing a firewall rule audit.

TASK:
Analyze the firewall rule report provided in the knowledge store. For each rule, categorize it into exactly ONE of the following categories:

1. RISKY — Rules that create significant security exposure (e.g., any-to-any rules, inbound access to sensitive ports from untrusted sources, rules allowing dangerous protocols)
2. OVER-PERMISSIVE — Rules that are broader than necessary (e.g., allow all ports when specific ports would suffice, allow all protocols when only one is needed, overly broad source/destination ranges)
3. UNUSED — Rules with zero or negligible hit counts during the analysis period
4. REDUNDANT — Rules whose traffic is already fully covered by another rule with equal or higher priority

ANALYSIS GUIDELINES:
- Consider the rule's source, destination, service/port, action (permit/deny), and hit count
- For RISKY classification: any rule allowing inbound traffic from "any" to critical infrastructure ports (22, 3389, 1433, 3306, 5432) should be flagged
- For OVER-PERMISSIVE: compare the rule's scope against the principle of least privilege
- For UNUSED: rules with 0 hits over 90+ days are unused; rules with <10 hits over 90 days are near-unused
- For REDUNDANT: check if a broader rule already covers the same traffic path
- If a rule could belong to multiple categories, choose the highest-risk category

OUTPUT FORMAT:
For each rule, output a row with these columns:
| Rule ID | Rule Name | Category | Confidence | Risk Level | Requestor | Application | Last Hit | Recommendation |

CONFIDENCE LEVELS:
- HIGH — Clear indicators support the categorization
- MEDIUM — Some ambiguity but categorization is likely correct
- LOW — Insufficient data to be certain; manual review recommended

After the table, provide:
1. A summary count of rules per category
2. Top 5 highest-priority rules to remediate immediately
3. Recommended next steps for the security team
💡
Prompt Engineering Tips:
  • Be specific about the role — "senior network security engineer" sets the right context
  • Define output format explicitly — the model follows table structures well
  • Include edge case handling — "if a rule could belong to multiple categories…"
  • Add confidence levels — makes the output actionable (high confidence = auto-fix, low = manual review)
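As a sanity check on the agent's output, the ANALYSIS GUIDELINES above can be approximated as a deterministic baseline. A minimal Python sketch, assuming a hypothetical rule dictionary — the `source`, `port`, `protocol`, and `hits_90d` keys are illustrative, not your firewall export's actual schema:

```python
# Hypothetical rule schema -- adapt the keys to your actual export format.
SENSITIVE_PORTS = {22, 3389, 1433, 3306, 5432}  # from the RISKY guideline above

def categorize(rule: dict) -> str:
    """Return exactly ONE category, preferring the highest-risk match."""
    # RISKY: inbound from "any" to a critical infrastructure port
    if rule["source"] == "any" and rule["port"] in SENSITIVE_PORTS:
        return "RISKY"
    # OVER-PERMISSIVE: all ports or all protocols where one would suffice
    if rule["port"] == "any" or rule["protocol"] == "any":
        return "OVER-PERMISSIVE"
    # UNUSED: zero hits over the 90-day analysis period
    if rule["hits_90d"] == 0:
        return "UNUSED"
    # REDUNDANT requires comparing rules pairwise (coverage by another rule),
    # so it is out of scope for this single-rule check.
    return "OK"

print(categorize({"source": "any", "port": 3389, "protocol": "tcp", "hits_90d": 1200}))  # RISKY
```

Spot-checking a handful of the agent's categorizations against a baseline like this is a quick way to find miscategorizations during the review step.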
Step 4: Add the Report Formatting Step (LLM Step)
⏱ 10 min
  1. Click + to add another step → Select LLM Step
  2. Name the step: Format Cleanup Report
  3. Enter the following prompt:
You are formatting the firewall rule analysis into an executive-ready cleanup report.

Take the categorized rule analysis from the previous step and produce:

SECTION 1 — EXECUTIVE SUMMARY
- Total rules analyzed
- Breakdown by category (count and percentage)
- Overall risk assessment (Critical / High / Medium / Low)
- Estimated effort to remediate (based on rule count per category)

SECTION 2 — IMMEDIATE ACTION ITEMS
List the top 10 highest-priority rules to address, sorted by risk. For each:
- Rule ID and name
- Why it's flagged
- Recommended action (remove, tighten, merge, review)
- Estimated impact if left unaddressed

SECTION 3 — CATEGORY DETAILS
For each category, list all rules in a table sorted by confidence level (HIGH first).

SECTION 4 — RECOMMENDATIONS
- Quick wins (high confidence, low effort to fix)
- Rules requiring manual review (low confidence)
- Suggested review cadence going forward
- Integration recommendations (e.g., connect to CrowdStrike for threat-informed prioritization)

Keep the tone professional and direct. This report goes to the security operations manager.
Step 5: Run and Review
⏱ 15 min
  1. Click Generate / Run to execute the agent
  2. Review the output:
    • Are the categorizations accurate?
    • Do the confidence levels make sense?
    • Is the executive summary useful?
  3. Iterate: Try adjusting the prompt — change the confidence threshold, add a category (e.g., "Expired" for rules with end dates in the past), ask for JSON or CSV output
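If you keep the default pipe-table output, it can be converted to CSV for spreadsheet review with a few lines of Python. A sketch, assuming the agent followed the table format defined in the prompt (the sample rows below are made up):

```python
# Convert the agent's markdown pipe table into CSV rows.
# The table content here is a fabricated example for illustration.
import csv
import io

markdown = """\
| Rule ID | Rule Name | Category | Confidence |
| --- | --- | --- | --- |
| FW-001 | allow-any-rdp | RISKY | HIGH |
| FW-002 | legacy-ftp | UNUSED | MEDIUM |
"""

rows = []
for line in markdown.splitlines():
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    # Skip the |---|---| separator row (and any blank lines)
    if cells and not set(cells[0]) <= {"-", " "}:
        rows.append(cells)

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```

The same parsing approach works for the cleanup report's tables, or you can skip it entirely by asking the agent for CSV directly.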

Discussion Points

After completing the exercise, discuss with the group:

  1. Accuracy: How well did the agent categorize the sample rules? Where did it struggle?
  2. Prompt refinement: What prompt changes improved the output most?
  3. Production readiness: What would you need to add before using this in production? (e.g., integration with your firewall management system, automated ticketing for remediation)
  4. Extensions:
    • Add a Trigger Agent that runs this analysis automatically when a new rule report is uploaded
    • Connect to CrowdStrike to enrich the analysis with threat intelligence
    • Connect to Jira/ServiceNow to auto-create remediation tickets for high-confidence findings
    • Add a scheduled run (weekly/monthly audit cadence)

Troubleshooting

| Issue | Solution |
| --- | --- |
| Agent doesn't see the uploaded file | Ensure the file is added to the Knowledge Store, not just the chat |
| Output is too generic | Add more specific examples to the prompt; include your organization's security standards in the Knowledge Store |
| Categories overlap too much | Strengthen the "choose the highest-risk category" instruction; add explicit examples for edge cases |
| Hit count data missing | If the sample report doesn't include hit counts, remove the UNUSED category or note that it requires hit-count data |
~45 min Workflow Agent Block 2 · 1:30–2:15

Pathfinder: NIST CSF Compliance Mapper

Your organization needs to demonstrate compliance with the NIST Cybersecurity Framework (CSF). Compliance teams collect evidence — policies, standards documents, configuration screenshots, audit reports — and manually map each to the relevant NIST CSF controls. This is labor-intensive, error-prone, and must be repeated for each audit cycle.

You'll build a Kindo Workflow Agent ("Pathfinder") that:

  1. Ingests evidence documents (policies, standards, configuration data)
  2. Maps each document to relevant NIST CSF controls
  3. Evaluates compliance status per control
  4. Produces a structured compliance report

NIST CSF Functions (reference)

| Code | Function | Description |
| --- | --- | --- |
| GV | Govern | Establish and monitor cybersecurity risk management strategy and policy |
| ID | Identify | Understand organizational context, assets, risks, and supply chain |
| PR | Protect | Implement safeguards to ensure delivery of critical services |
| DE | Detect | Identify cybersecurity events in a timely manner |
| RS | Respond | Take action regarding a detected cybersecurity incident |
| RC | Recover | Restore capabilities impaired by a cybersecurity incident |
Prerequisites
  • Kindo account with access to the training instance
  • Sample evidence documents (provided by facilitator): policies, standards, or configuration screenshots
  • NIST CSF reference (provided by facilitator or at NIST.gov)
Step-by-Step Guide
Step 1: Create the Agent
⏱ 5 min
  1. Sign in to your Kindo instance
  2. Navigate to the Agents tab
  3. Click Create an Agent
  4. Select Workflow Agent
  5. Configure the agent:
    • Name: Pathfinder — NIST CSF Compliance Mapper
    • Description: Reviews evidence and supporting documents, maps them to NIST CSF controls, and evaluates compliance status. Produces a structured compliance report.
Step 2: Set Up the Knowledge Store
⏱ 10 min

This agent needs two types of reference material:

  1. NIST CSF Framework Reference
    • Upload a NIST CSF control catalog (the facilitator will provide this)
    • This gives the agent the full list of controls to map against
  2. Evidence Documents
    • Upload the sample policies, standards, and/or configuration screenshots
    • In production, these would be the actual audit evidence collected by the compliance team
💡 For best results, upload the NIST CSF reference as a structured document (markdown or CSV with control IDs, names, and descriptions). The more structured the reference, the more accurate the mapping.
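A minimal structured reference might look like the CSV sketched below. The three control IDs are real CSF 2.0 subcategories, but the names are paraphrased for brevity — verify everything against the official NIST catalog before using it as a reference document:

```python
# Sketch of a structured NIST CSF reference in CSV form.
# Control IDs are CSF 2.0 subcategories; names are paraphrased, not official text.
import csv
import io

reference_csv = """\
control_id,control_name,function
GV.OC-01,Organizational mission and context,GV
PR.AA-01,Identity and credential management,PR
DE.CM-01,Network monitoring,DE
"""

controls = list(csv.DictReader(io.StringIO(reference_csv)))
print(sorted({c["function"] for c in controls}))  # ['DE', 'GV', 'PR']
```

Giving the agent an explicit `control_id` column like this makes the mapping table's Control ID column much less prone to hallucinated identifiers.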
Step 3: Write the Mapping Prompt (LLM Step)
⏱ 10 min
  1. Click + to add a step → Select LLM Step
  2. Name the step: Map Evidence to NIST CSF Controls
  3. Enter the following prompt:
You are a cybersecurity compliance analyst performing a NIST CSF mapping exercise.

TASK:
Review each evidence document provided in the knowledge store. For each document, identify which NIST CSF controls it supports or addresses.

MAPPING GUIDELINES:
- A single document may map to multiple NIST CSF controls
- A single control may be supported by multiple documents
- Only map a document to a control if there is a clear, defensible connection
- If a document partially addresses a control, note it as partial coverage
- Do not force mappings — if a document doesn't clearly relate to any control, note it as "Unmapped"

OUTPUT FORMAT:
Produce a mapping table with these columns:
| Document Name | NIST CSF Control ID | Control Name | Function (GV/ID/PR/DE/RS/RC) | Mapping Rationale | Coverage Level |

COVERAGE LEVELS:
- FULL — The document directly and completely addresses the control requirements
- PARTIAL — The document addresses some but not all aspects of the control
- REFERENCE — The document provides supporting context but is not primary evidence
- NONE — Included for completeness; no meaningful mapping found

After the mapping table, provide:
1. A coverage summary: how many controls have FULL, PARTIAL, REFERENCE, or no coverage
2. A list of NIST CSF controls with NO evidence mapped to them (gaps)
3. Documents that couldn't be mapped to any control (orphan evidence)
💡
Prompt Engineering Tips:
  • Specificity matters: Four coverage levels prevent binary (mapped/not mapped) output
  • Gap identification is key: The compliance team cares most about what's NOT covered
  • Mapping rationale: Forces the model to explain its reasoning — makes output auditable
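Once the mapping table is parsed into rows, the coverage summary and gap list the prompt asks for can be cross-checked mechanically. A sketch with illustrative data — the control IDs, document names, and mappings are made up for the example:

```python
# Cross-check the agent's coverage summary: count coverage levels and
# compute the gap list (controls with no mapped evidence).
# All data below is illustrative.
from collections import Counter

all_controls = {"GV.OC-01", "PR.AA-01", "DE.CM-01", "RS.MA-01"}
mappings = [
    {"document": "Access Control Policy", "control": "PR.AA-01", "coverage": "FULL"},
    {"document": "IR Runbook", "control": "RS.MA-01", "coverage": "PARTIAL"},
]

by_coverage = Counter(m["coverage"] for m in mappings)
gaps = sorted(all_controls - {m["control"] for m in mappings})

print(dict(by_coverage))  # {'FULL': 1, 'PARTIAL': 1}
print(gaps)               # ['DE.CM-01', 'GV.OC-01']
```

If the agent's gap list disagrees with a recomputation like this, the mapping table (not the summary) is the source of truth to review.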
Step 4: Write the Compliance Assessment Prompt (LLM Step)
⏱ 10 min
  1. Click + to add another step → Select LLM Step
  2. Name the step: Evaluate Compliance Status
  3. Enter the following prompt:
You are a senior compliance assessor evaluating NIST CSF compliance based on the evidence mapping from the previous step.

TASK:
For each NIST CSF control that has at least one mapped evidence document, evaluate the overall compliance status.

ASSESSMENT CRITERIA:
- COMPLIANT — Sufficient evidence demonstrates the control is fully implemented and operating effectively
- PARTIALLY COMPLIANT — Evidence shows the control is implemented but with gaps (e.g., policy exists but enforcement is incomplete, or coverage is limited to some systems)
- NON-COMPLIANT — Evidence shows the control is not implemented, or implementation is fundamentally inadequate
- INSUFFICIENT EVIDENCE — The mapped documents don't provide enough information to make a determination. This is NOT the same as non-compliant — it means we need more evidence.

OUTPUT FORMAT:

SECTION 1 — COMPLIANCE SCORECARD
| NIST CSF Function | Total Controls | Compliant | Partially Compliant | Non-Compliant | Insufficient Evidence | Not Assessed |
(One row per function: GV, ID, PR, DE, RS, RC)

SECTION 2 — DETAILED ASSESSMENT
For each assessed control:
| Control ID | Control Name | Status | Evidence Documents | Key Findings | Recommendations |

SECTION 3 — PRIORITY REMEDIATION
List the top 10 controls to prioritize for remediation, considering:
- Non-compliant controls in high-risk functions (Protect, Detect, Respond)
- Controls with partial compliance that could be quickly brought to full compliance
- Controls flagged as insufficient evidence (quick evidence collection wins)

SECTION 4 — GAP ANALYSIS
- Controls with no evidence (from the mapping step)
- Functions with the lowest compliance rates
- Recommended evidence collection strategy for gap areas

Keep the tone objective and evidence-based. Every compliance determination must reference specific evidence.
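The Section 1 scorecard is essentially a group-by on the function prefix of each control ID. A sketch with illustrative assessments — the control IDs and statuses below are made up for the example:

```python
# Build the per-function scorecard by grouping assessed controls on the
# function prefix of the control ID (e.g., "PR.AA-01" -> "PR").
# Assessment data is illustrative.
from collections import defaultdict

assessments = [
    {"control": "PR.AA-01", "status": "COMPLIANT"},
    {"control": "PR.DS-01", "status": "PARTIALLY COMPLIANT"},
    {"control": "DE.CM-01", "status": "INSUFFICIENT EVIDENCE"},
]

scorecard = defaultdict(lambda: defaultdict(int))
for a in assessments:
    function = a["control"].split(".")[0]
    scorecard[function][a["status"]] += 1

# One row per function, matching the prompt's scorecard layout
for function in ("GV", "ID", "PR", "DE", "RS", "RC"):
    print(function, dict(scorecard[function]))
```

Recomputing the scorecard from the Section 2 detail table is an easy way to verify the agent's counts are internally consistent.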
Step 5: Run and Review
⏱ 10 min
  1. Click Generate / Run to execute the agent
  2. Review the output:
    • Does the evidence-to-control mapping make sense?
    • Are the compliance assessments defensible?
    • Are the gaps identified accurately?
  3. Iterate:
    • Upload additional evidence documents and re-run
    • Change the framework (could this same pattern work for ISO 27001? SOC 2?)
    • Ask for a specific output format (XLSX-compatible table, executive summary)

Discussion Points

After completing the exercise, discuss with the group:

  1. Accuracy: How well did the agent map evidence to controls? Were there false positives?
  2. Coverage vs. compliance: What's the difference between "we have evidence mapped to this control" and "we're actually compliant"? How does the agent handle this distinction?
  3. Scalability: This exercise used a few sample documents. How would this work with 500+ evidence artifacts? (Discuss: chunking strategies, batch processing, hierarchical analysis)
  4. Framework portability: Could the same agent be adapted for ISO 27001, SOC 2, PCI DSS, or CMMC? What would change?
  5. Extensions:
    • Add a Trigger Agent that re-runs the assessment when new evidence is uploaded
    • Connect to Jira/ServiceNow to auto-create remediation tickets for non-compliant controls
    • Build a scheduled agent for quarterly compliance re-assessment
    • Use Canvas to build a live compliance dashboard
    • Add a chatbot that compliance teams can ask "What evidence do we have for PR.AC-1?"
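On the scalability question above: one simple pattern for 500+ evidence artifacts is to run the mapping step over fixed-size batches and merge the per-batch mapping tables afterward. A sketch of the batching alone — the file names and batch size are placeholders, not recommendations:

```python
# Batch a large evidence set for per-batch mapping runs.
# File names and batch size are placeholders for illustration.

def batches(items, size):
    """Yield consecutive fixed-size slices of a list (last one may be short)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

documents = [f"evidence_{n:03}.pdf" for n in range(500)]

for batch in batches(documents, 50):
    # In practice: run the mapping step on this batch and collect its
    # mapping-table rows, then merge all rows before the assessment step.
    pass

print(sum(1 for _ in batches(documents, 50)))  # 10 batches
```

Batching keeps each run within context limits; the trade-off is that cross-document checks (e.g., redundant evidence) need a merge-and-review pass at the end.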

Troubleshooting

| Issue | Solution |
| --- | --- |
| Mappings are too broad (everything maps to everything) | Tighten the prompt: add "Only map if there is a direct, specific relationship" and include examples of good vs. bad mappings |
| Agent misidentifies NIST CSF control IDs | Ensure the NIST CSF reference in the Knowledge Store uses the correct control taxonomy (CSF 2.0 vs. 1.1). Specify the version in the prompt. |
| Compliance assessments are always "Insufficient Evidence" | The sample data may be too sparse. This is realistic — discuss with the group how to improve evidence collection. |
| Output is too long to read | Add a constraint: "Limit the detailed assessment to controls that are Non-Compliant or Partially Compliant. Summarize Compliant controls in a separate count." |
| Configuration screenshots aren't parsed well | Text-based evidence works better than screenshots. In production, extract text from screenshots before uploading, or use OCR pre-processing. |