Hands-On Exercises
Build two real cybersecurity workflow agents using Deloitte use cases. Each exercise takes ~45 minutes with full step-by-step guidance.
Firewall Rule Optimizer
Your organization's network security team exports firewall rule reports periodically. These reports contain hundreds of rules accumulated over years; many are outdated, overly permissive, or redundant. Manual review is time-consuming and error-prone.
You'll build a Kindo Workflow Agent that:
- Ingests a firewall rule report
- Analyzes each rule against security best practices
- Categorizes rules into four cleanup categories
- Outputs a structured report with confidence levels and metadata
Output Categories
| Category | Description | Risk Level |
|---|---|---|
| Risky rules | Rules that expose the network to known threats (e.g., allow inbound from any source to sensitive ports) | 🔴 High |
| Over-permissive rules | Rules broader than necessary (e.g., allow all protocols when only TCP/443 is needed) | 🟡 Medium |
| Unused rules | Rules with zero or near-zero hit counts over the analysis period | 🟢 Low |
| Redundant rules | Rules that duplicate or are fully covered by other rules | 🟢 Low |
Prerequisites
- Kindo account with access to the training instance
- Sample firewall rule report (provided by facilitator)
Step 1: Create the Agent
- Sign in to your Kindo instance
- Navigate to the Agents tab
- Click Create an Agent
- Select Workflow Agent
- Configure the agent:
  - Name: Firewall Rule Optimizer
  - Description: Analyzes firewall rule reports and categorizes rules for cleanup; identifies risky, over-permissive, unused, and redundant rules with confidence levels.
Step 2: Add the Knowledge Store
- In the Agent Configuration panel, click Add Knowledge Store
- Upload the sample firewall rule report (CSV or text format)
- Optionally, upload a reference document with your organization's firewall policy standards
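If you need to build your own sample report, a minimal CSV along these lines works (the column names and rows are illustrative assumptions, not a required schema):

```csv
rule_id,rule_name,source,destination,service,action,hit_count,last_hit,requestor,application
FW-0001,Allow-Any-RDP,any,10.0.5.20,TCP/3389,permit,1420,2024-05-30,jsmith,Jump host
FW-0002,Legacy-ERP-Sync,10.1.0.0/16,10.2.4.11,ANY,permit,0,never,unknown,Retired ERP
FW-0003,Web-HTTPS,any,10.0.8.40,TCP/443,permit,98213,2024-06-01,appteam,Public web
FW-0004,Web-All-Ports,any,10.0.8.40,ANY,permit,2,2024-01-12,appteam,Public web
```

The rows are seeded so every category can fire: FW-0001 is risky (inbound RDP from any), FW-0002 is unused, and FW-0004 is over-permissive and largely redundant with FW-0003.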
Step 3: Add the Analysis Step
- Click + to add a step → Select LLM Step
- Name the step: Analyze and Categorize Rules
- Enter the following prompt:
You are a senior network security engineer performing a firewall rule audit.
TASK:
Analyze the firewall rule report provided in the knowledge store. For each rule, categorize it into exactly ONE of the following categories:
1. RISKY: Rules that create significant security exposure (e.g., any-to-any rules, inbound access to sensitive ports from untrusted sources, rules allowing dangerous protocols)
2. OVER-PERMISSIVE: Rules that are broader than necessary (e.g., allow all ports when specific ports would suffice, allow all protocols when only one is needed, overly broad source/destination ranges)
3. UNUSED: Rules with zero or negligible hit counts during the analysis period
4. REDUNDANT: Rules whose traffic is already fully covered by another rule with equal or higher priority
ANALYSIS GUIDELINES:
- Consider the rule's source, destination, service/port, action (permit/deny), and hit count
- For RISKY classification: any rule allowing inbound traffic from "any" to critical infrastructure ports (22, 3389, 1433, 3306, 5432) should be flagged
- For OVER-PERMISSIVE: compare the rule's scope against the principle of least privilege
- For UNUSED: rules with 0 hits over 90+ days are unused; rules with <10 hits over 90 days are near-unused
- For REDUNDANT: check if a broader rule already covers the same traffic path
- If a rule could belong to multiple categories, choose the highest-risk category
OUTPUT FORMAT:
For each rule, output a row with these columns:
| Rule ID | Rule Name | Category | Confidence | Risk Level | Requestor | Application | Last Hit | Recommendation |
CONFIDENCE LEVELS:
- HIGH: Clear indicators support the categorization
- MEDIUM: Some ambiguity but categorization is likely correct
- LOW: Insufficient data to be certain; manual review recommended
After the table, provide:
1. A summary count of rules per category
2. Top 5 highest-priority rules to remediate immediately
3. Recommended next steps for the security team
Prompt design notes:
- Be specific about the role: "senior network security engineer" sets the right context
- Define the output format explicitly: the model follows table structures well
- Include edge case handling: "if a rule could belong to multiple categories…"
- Add confidence levels: makes the output actionable (high confidence = auto-fix, low = manual review)
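The ANALYSIS GUIDELINES above are concrete enough to encode as a deterministic pre-filter, which is useful for spot-checking the agent's categorizations. A minimal Python sketch, assuming the column names from the sample CSV shown earlier (redundancy detection needs pairwise rule comparison and is left to the agent):

```python
import csv

# Sensitive ports called out in the prompt's RISKY guideline
SENSITIVE_PORTS = {"22", "3389", "1433", "3306", "5432"}

def categorize(rule: dict) -> str:
    """Apply the prompt's heuristics in priority order (highest risk wins)."""
    port = rule["service"].split("/")[-1]  # "TCP/3389" -> "3389", "ANY" -> "ANY"
    wide_open = rule["service"].upper() == "ANY"
    if rule["action"] == "permit" and rule["source"] == "any" and (
        port in SENSITIVE_PORTS or wide_open
    ):
        return "RISKY"
    if wide_open:
        return "OVER-PERMISSIVE"
    if int(rule["hit_count"]) == 0:
        return "UNUSED"  # assumes the report covers 90+ days
    return "UNCLASSIFIED"  # leave nuanced cases to the LLM / manual review

with open("firewall_rules.csv", newline="") as f:
    for rule in csv.DictReader(f):
        print(rule["rule_id"], categorize(rule))
```

Comparing this script's RISKY and UNUSED calls against the agent's table is a quick way to surface rows worth a manual look.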
Step 4: Add the Report Formatting Step
- Click + to add another step → Select LLM Step
- Name the step: Format Cleanup Report
- Enter the following prompt:
You are formatting the firewall rule analysis into an executive-ready cleanup report.
Take the categorized rule analysis from the previous step and produce:
SECTION 1: EXECUTIVE SUMMARY
- Total rules analyzed
- Breakdown by category (count and percentage)
- Overall risk assessment (Critical / High / Medium / Low)
- Estimated effort to remediate (based on rule count per category)
SECTION 2: IMMEDIATE ACTION ITEMS
List the top 10 highest-priority rules to address, sorted by risk. For each:
- Rule ID and name
- Why it's flagged
- Recommended action (remove, tighten, merge, review)
- Estimated impact if left unaddressed
SECTION 3: CATEGORY DETAILS
For each category, list all rules in a table sorted by confidence level (HIGH first).
SECTION 4: RECOMMENDATIONS
- Quick wins (high confidence, low effort to fix)
- Rules requiring manual review (low confidence)
- Suggested review cadence going forward
- Integration recommendations (e.g., connect to CrowdStrike for threat-informed prioritization)
Keep the tone professional and direct. This report goes to the security operations manager.
Step 5: Run and Review
- Click Generate / Run to execute the agent
- Review the output:
- Are the categorizations accurate?
- Do the confidence levels make sense?
- Is the executive summary useful?
- Iterate: try adjusting the prompt. Change the confidence threshold, add a category (e.g., "Expired" for rules with end dates in the past), or ask for JSON or CSV output
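If you try the JSON output variant, it helps to pin the field names in the prompt and then validate what comes back. A sketch of such a check; the key set mirrors the table columns in the prompt, so adjust it to whatever schema you actually specify:

```python
import json

# Expected shape of one finding when the prompt asks for JSON output.
# Field names mirror the table columns in the prompt; these are
# suggestions, not a Kindo requirement.
REQUIRED_KEYS = {
    "rule_id", "rule_name", "category", "confidence", "risk_level",
    "requestor", "application", "last_hit", "recommendation",
}
CATEGORIES = {"RISKY", "OVER-PERMISSIVE", "UNUSED", "REDUNDANT"}

def validate(report_json: str) -> list[str]:
    """Return a list of problems found in the agent's JSON output."""
    problems = []
    for i, finding in enumerate(json.loads(report_json)):
        missing = REQUIRED_KEYS - set(finding)
        if missing:
            problems.append(f"finding {i}: missing keys {sorted(missing)}")
        if finding.get("category") not in CATEGORIES:
            problems.append(f"finding {i}: unexpected category {finding.get('category')!r}")
    return problems
```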
Discussion Points
After completing the exercise, discuss with the group:
- Accuracy: How well did the agent categorize the sample rules? Where did it struggle?
- Prompt refinement: What prompt changes improved the output most?
- Production readiness: What would you need to add before using this in production? (e.g., integration with your firewall management system, automated ticketing for remediation)
- Extensions:
- Add a Trigger Agent that runs this analysis automatically when a new rule report is uploaded
- Connect to CrowdStrike to enrich the analysis with threat intelligence
- Connect to Jira/ServiceNow to auto-create remediation tickets for high-confidence findings
- Add a scheduled run (weekly/monthly audit cadence)
Troubleshooting
| Issue | Solution |
|---|---|
| Agent doesn't see the uploaded file | Ensure the file is added to the Knowledge Store, not just the chat |
| Output is too generic | Add more specific examples to the prompt; include your organization's security standards in the Knowledge Store |
| Categories overlap too much | Strengthen the "choose the highest-risk category" instruction; add explicit examples for edge cases |
| Hit count data missing | If the sample report doesn't include hit counts, remove the UNUSED category or note it requires hit count data |
Pathfinder: NIST CSF Compliance Mapper
Your organization needs to demonstrate compliance with the NIST Cybersecurity Framework (CSF). Compliance teams collect evidence (policies, standards documents, configuration screenshots, audit reports) and manually map each to the relevant NIST CSF controls. This is labor-intensive, error-prone, and must be repeated for each audit cycle.
You'll build a Kindo Workflow Agent ("Pathfinder") that:
- Ingests evidence documents (policies, standards, configuration data)
- Maps each document to relevant NIST CSF controls
- Evaluates compliance status per control
- Produces a structured compliance report
NIST CSF Functions (reference)
- Govern (GV)
- Identify (ID)
- Protect (PR)
- Detect (DE)
- Respond (RS)
- Recover (RC)
Prerequisites
- Kindo account with access to the training instance
- Sample evidence documents (provided by facilitator): policies, standards, or configuration screenshots
- NIST CSF reference (provided by facilitator or at NIST.gov)
Step 1: Create the Agent
- Sign in to your Kindo instance
- Navigate to the Agents tab
- Click Create an Agent
- Select Workflow Agent
- Configure the agent:
  - Name: Pathfinder – NIST CSF Compliance Mapper
  - Description: Reviews evidence and supporting documents, maps them to NIST CSF controls, and evaluates compliance status. Produces a structured compliance report.
Step 2: Add the Knowledge Stores
This agent needs two types of reference material:
- NIST CSF Framework Reference
  - Upload a NIST CSF control catalog (the facilitator will provide this)
  - This gives the agent the full list of controls to map against
- Evidence Documents
  - Upload the sample policies, standards, and/or configuration screenshots
  - In production, these would be the actual audit evidence collected by the compliance team
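The control catalog can be as simple as a CSV with one row per subcategory. A two-row excerpt of what that might look like (control names abbreviated here; use the official CSF 2.0 text in practice):

```csv
control_id,function,control_name
GV.OC-01,GV,Organizational mission is understood and informs cybersecurity risk management
PR.AA-01,PR,Identities and credentials for authorized users and services are managed
```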
Step 3: Add the Mapping Step
- Click + to add a step → Select LLM Step
- Name the step: Map Evidence to NIST CSF Controls
- Enter the following prompt:
You are a cybersecurity compliance analyst performing a NIST CSF mapping exercise.
TASK:
Review each evidence document provided in the knowledge store. For each document, identify which NIST CSF controls it supports or addresses.
MAPPING GUIDELINES:
- A single document may map to multiple NIST CSF controls
- A single control may be supported by multiple documents
- Only map a document to a control if there is a clear, defensible connection
- If a document partially addresses a control, note it as partial coverage
- Do not force mappings; if a document doesn't clearly relate to any control, note it as "Unmapped"
OUTPUT FORMAT:
Produce a mapping table with these columns:
| Document Name | NIST CSF Control ID | Control Name | Function (GV/ID/PR/DE/RS/RC) | Mapping Rationale | Coverage Level |
COVERAGE LEVELS:
- FULL: The document directly and completely addresses the control requirements
- PARTIAL: The document addresses some but not all aspects of the control
- REFERENCE: The document provides supporting context but is not primary evidence
- NONE: Included for completeness; no meaningful mapping found
After the mapping table, provide:
1. A coverage summary: how many controls have FULL, PARTIAL, REFERENCE, or no coverage
2. A list of NIST CSF controls with NO evidence mapped to them (gaps)
3. Documents that couldn't be mapped to any control (orphan evidence)
Prompt design notes:
- Specificity matters: Four coverage levels prevent binary (mapped/not mapped) output
- Gap identification is key: The compliance team cares most about what's NOT covered
- Mapping rationale: Forces the model to explain its reasoning, which makes the output auditable
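The coverage summary and gap list requested at the end of the prompt can also be recomputed deterministically from the mapping table, which is a useful audit of the agent's own summary. A sketch, assuming the mapping rows have been parsed into dicts and `catalog` holds every control ID from the CSF reference store:

```python
from collections import Counter

RANK = {"NONE": 0, "REFERENCE": 1, "PARTIAL": 2, "FULL": 3}

def coverage_report(mappings: list[dict], catalog: set[str]) -> None:
    """Summarize the best coverage per control and list unmapped controls (gaps)."""
    best: dict[str, str] = {}  # control_id -> strongest coverage level seen
    for m in mappings:
        cid, level = m["control_id"], m["coverage_level"]
        if RANK[level] > RANK[best.get(cid, "NONE")]:
            best[cid] = level
    print("Coverage summary:", Counter(best.values()))
    print("Gaps (no evidence):", sorted(catalog - set(best)))
```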
Step 4: Add the Assessment Step
- Click + to add another step → Select LLM Step
- Name the step: Evaluate Compliance Status
- Enter the following prompt:
You are a senior compliance assessor evaluating NIST CSF compliance based on the evidence mapping from the previous step.
TASK:
For each NIST CSF control that has at least one mapped evidence document, evaluate the overall compliance status.
ASSESSMENT CRITERIA:
- COMPLIANT: Sufficient evidence demonstrates the control is fully implemented and operating effectively
- PARTIALLY COMPLIANT: Evidence shows the control is implemented but with gaps (e.g., policy exists but enforcement is incomplete, or coverage is limited to some systems)
- NON-COMPLIANT: Evidence shows the control is not implemented, or implementation is fundamentally inadequate
- INSUFFICIENT EVIDENCE: The mapped documents don't provide enough information to make a determination. This is NOT the same as non-compliant; it means we need more evidence.
OUTPUT FORMAT:
SECTION 1: COMPLIANCE SCORECARD
| NIST CSF Function | Total Controls | Compliant | Partially Compliant | Non-Compliant | Insufficient Evidence | Not Assessed |
(One row per function: GV, ID, PR, DE, RS, RC)
SECTION 2: DETAILED ASSESSMENT
For each assessed control:
| Control ID | Control Name | Status | Evidence Documents | Key Findings | Recommendations |
SECTION 3: PRIORITY REMEDIATION
List the top 10 controls to prioritize for remediation, considering:
- Non-compliant controls in high-risk functions (Protect, Detect, Respond)
- Controls with partial compliance that could be quickly brought to full compliance
- Controls flagged as insufficient evidence (quick evidence collection wins)
SECTION 4: GAP ANALYSIS
- Controls with no evidence (from the mapping step)
- Functions with the lowest compliance rates
- Recommended evidence collection strategy for gap areas
Keep the tone objective and evidence-based. Every compliance determination must reference specific evidence.
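Section 1 of this prompt asks the model to aggregate its own Section 2, which is a common place for LLM arithmetic to drift. A small cross-check, assuming the detailed assessment has been parsed into dicts with `control_id` and `status` fields (names are assumptions):

```python
from collections import Counter, defaultdict

FUNCTIONS = ("GV", "ID", "PR", "DE", "RS", "RC")
STATUSES = ("Compliant", "Partially Compliant", "Non-Compliant", "Insufficient Evidence")

def scorecard(assessments: list[dict]) -> None:
    """Recount per-function status totals to verify the agent's Section 1 table."""
    by_function: dict[str, Counter] = defaultdict(Counter)
    for a in assessments:
        function = a["control_id"].split(".")[0]  # "PR.AA-01" -> "PR"
        by_function[function][a["status"]] += 1
    for fn in FUNCTIONS:
        print(fn, {s: by_function[fn][s] for s in STATUSES})
```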
Step 5: Run and Review
- Click Generate / Run to execute the agent
- Review the output:
- Does the evidence-to-control mapping make sense?
- Are the compliance assessments defensible?
- Are the gaps identified accurately?
- Iterate:
- Upload additional evidence documents and re-run
- Change the framework (could this same pattern work for ISO 27001? SOC 2?)
- Ask for a specific output format (XLSX-compatible table, executive summary)
Discussion Points
After completing the exercise, discuss with the group:
- Accuracy: How well did the agent map evidence to controls? Were there false positives?
- Coverage vs. compliance: What's the difference between "we have evidence mapped to this control" and "we're actually compliant"? How does the agent handle this distinction?
- Scalability: This exercise used a few sample documents. How would this work with 500+ evidence artifacts? (Discuss: chunking strategies, batch processing, hierarchical analysis; a minimal batching sketch follows this list)
- Framework portability: Could the same agent be adapted for ISO 27001, SOC 2, PCI DSS, or CMMC? What would change?
- Extensions:
- Add a Trigger Agent that re-runs the assessment when new evidence is uploaded
- Connect to Jira/ServiceNow to auto-create remediation tickets for non-compliant controls
- Build a scheduled agent for quarterly compliance re-assessment
- Use Canvas to build a live compliance dashboard
- Add a chatbot that compliance teams can ask "What evidence do we have for PR.AC-1?"
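For the scalability question above, one workable pattern is to batch the evidence set, run the mapping step per batch, and merge the mapping tables before the compliance evaluation. A minimal sketch (the batch size is an assumption; tune it to your documents' length):

```python
from typing import Iterator

def batches(documents: list[str], max_docs: int = 20) -> Iterator[list[str]]:
    """Split a large evidence set into chunks small enough for one mapping run."""
    for i in range(0, len(documents), max_docs):
        yield documents[i : i + max_docs]

# Run the mapping step once per batch, concatenate the mapping tables,
# then de-duplicate on (document, control_id), keeping the strongest
# coverage level, before handing the merged table to the assessment step.
```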
Troubleshooting
| Issue | Solution |
|---|---|
| Mappings are too broad (everything maps to everything) | Tighten the prompt: add "Only map if there is a direct, specific relationship" and include examples of good vs. bad mappings |
| Agent misidentifies NIST CSF control IDs | Ensure the NIST CSF reference in the Knowledge Store uses the correct control taxonomy (CSF 2.0 vs 1.1). Specify the version in the prompt. |
| Compliance assessments are always "Insufficient Evidence" | The sample data may be too sparse. This is realistic; discuss with the group how to improve evidence collection. |
| Output is too long to read | Add a constraint: "Limit the detailed assessment to controls that are Non-Compliant or Partially Compliant. Summarize Compliant controls in a separate count." |
| Configuration screenshots aren't parsed well | Text-based evidence works better than screenshots. In production, extract text from screenshots before uploading, or use OCR pre-processing. |