Added in: 5.8.0

Prowler Lighthouse AI is a Cloud Security Analyst chatbot that helps you understand, prioritize, and remediate security findings in your cloud environments. It is designed to provide security expertise for teams without dedicated resources, acting as your 24/7 virtual cloud security analyst.

Set Up Lighthouse AI

Learn how to configure Lighthouse AI with your preferred LLM provider

Capabilities

Prowler Lighthouse AI is designed to be your AI security team member, with capabilities including:

Natural Language Querying

Ask questions in plain English about your security findings. Examples:
  • “What are my highest risk findings?”
  • “Show me all S3 buckets with public access.”
  • “What security issues were found in my production accounts?”

Detailed Remediation Guidance

Get tailored step-by-step instructions for fixing security issues:
  • Clear explanations of the problem and its impact
  • Commands or console steps to implement fixes
  • Alternative approaches with different solutions

Enhanced Context and Analysis

Lighthouse AI can provide additional context to help you understand the findings:
  • Explain security concepts related to findings in simple terms
  • Provide risk assessments based on your environment and context
  • Connect related findings to show broader security patterns

Important Notes

Prowler Lighthouse AI is powerful, but there are limitations:
  • Continuous improvement: Despite extensive testing, the feature may make mistakes or encounter errors. Please report any issues.
  • Access limitations: Lighthouse AI can only access data the logged-in user can view. If you can’t see certain information, Lighthouse AI can’t see it either.
  • NextJS session dependence: If your Prowler application session expires or you log out, Lighthouse AI will error out. Refresh and log back in to continue.
  • Response quality: Response quality depends on the selected LLM provider and model. Choose models with strong tool-calling capabilities for best results. We recommend OpenAI’s gpt-5 model.

Getting Help

If you encounter issues with Prowler Lighthouse AI or have suggestions for improvements, please reach out through our Slack channel.

What Data Is Shared to LLM Providers?

The following API endpoints are accessible to Prowler Lighthouse AI. Data from these endpoints may be shared with the LLM provider, depending on the scope of the user’s query:

Accessible API Endpoints

User Management:
  • List all users - /api/v1/users
  • Retrieve the current user’s information - /api/v1/users/me
Provider Management:
  • List all providers - /api/v1/providers
  • Retrieve data from a provider - /api/v1/providers/{id}
Scan Management:
  • List all scans - /api/v1/scans
  • Retrieve data from a specific scan - /api/v1/scans/{id}
Resource Management:
  • List all resources - /api/v1/resources
  • Retrieve data for a resource - /api/v1/resources/{id}
Findings Management:
  • List all findings - /api/v1/findings
  • Retrieve data from a specific finding - /api/v1/findings/{id}
  • Retrieve metadata values from findings - /api/v1/findings/metadata
Overview Data:
  • Get aggregated findings data - /api/v1/overviews/findings
  • Get findings data by severity - /api/v1/overviews/findings_severity
  • Get aggregated provider data - /api/v1/overviews/providers
  • Get findings data by service - /api/v1/overviews/services
Compliance Management:
  • List compliance overviews (optionally filter by scan) - /api/v1/compliance-overviews
  • Retrieve data from a specific compliance overview - /api/v1/compliance-overviews/{id}
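To illustrate how a read-only query against one of these endpoints might be built, here is a minimal sketch. The base URL is a placeholder, and the `filter[severity]` / `page[size]` parameter names follow the JSON:API convention — check your Prowler API version for the exact names.

```python
from urllib.parse import urlencode

BASE_URL = "https://api.prowler.example"  # placeholder host, not a real endpoint

def findings_url(severity=None, page_size=None):
    """Build a GET URL for the findings endpoint.

    Parameter names (`filter[severity]`, `page[size]`) are assumptions
    based on the JSON:API convention, not confirmed by this document.
    """
    params = {}
    if severity:
        params["filter[severity]"] = severity
    if page_size:
        params["page[size]"] = str(page_size)
    query = f"?{urlencode(params)}" if params else ""
    return f"{BASE_URL}/api/v1/findings{query}"
```

A query like “show me critical findings” would resolve to a single GET request built this way, with the response scoped by the logged-in user’s permissions.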

Excluded API Endpoints

Not all Prowler API endpoints are integrated with Lighthouse AI. Endpoints are intentionally excluded for the following reasons:
  • OpenAI and other LLM providers shouldn’t have access to sensitive data (such as provider secrets and other sensitive configuration)
  • User queries don’t need responses from those endpoints (e.g., tasks, tenant details, zip report downloads)
User Management:
  • Retrieve a specific user’s information - /api/v1/users/{id}
  • List user memberships - /api/v1/users/{user_pk}/memberships
  • Retrieve membership data from the user - /api/v1/users/{user_pk}/memberships/{id}
Tenant Management:
  • List all tenants - /api/v1/tenants
  • Retrieve data from a tenant - /api/v1/tenants/{id}
  • List tenant memberships - /api/v1/tenants/{tenant_pk}/memberships
  • List all invitations - /api/v1/tenants/invitations
  • Retrieve data from a tenant invitation - /api/v1/tenants/invitations/{id}
Security and Configuration:
  • List all secrets - /api/v1/providers/secrets
  • Retrieve data from a secret - /api/v1/providers/secrets/{id}
  • List all provider groups - /api/v1/provider-groups
  • Retrieve data from a provider group - /api/v1/provider-groups/{id}
Reports and Tasks:
  • Download zip report - /api/v1/scans/{id}/report
  • List all tasks - /api/v1/tasks
  • Retrieve data from a specific task - /api/v1/tasks/{id}
Lighthouse AI Configuration:
  • List LLM providers - /api/v1/lighthouse/providers
  • Retrieve LLM provider - /api/v1/lighthouse/providers/{id}
  • List available models - /api/v1/lighthouse/models
  • Retrieve tenant configuration - /api/v1/lighthouse/configuration
Agents can only call GET endpoints; other HTTP methods are not available to them.
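The GET-only, allowlisted access model above can be sketched as a simple guard. The prefixes mirror the accessible and excluded endpoint lists in this document; the function itself is an illustration, not Prowler’s actual implementation.

```python
# Read-only allowlist mirroring the accessible endpoints above
# (paths with IDs are reduced to their prefixes for this sketch).
ALLOWED_GET_PREFIXES = (
    "/api/v1/users",
    "/api/v1/providers",
    "/api/v1/scans",
    "/api/v1/resources",
    "/api/v1/findings",
    "/api/v1/overviews",
    "/api/v1/compliance-overviews",
)

# Excluded endpoints checked first, since some share a prefix
# with allowed ones (e.g. secrets live under /providers).
BLOCKED_PREFIXES = (
    "/api/v1/providers/secrets",
    "/api/v1/provider-groups",
    "/api/v1/tenants",
    "/api/v1/tasks",
    "/api/v1/lighthouse",
)

def is_request_allowed(method: str, path: str) -> bool:
    """Return True only for GET requests to non-excluded endpoints."""
    if method.upper() != "GET":
        return False  # no POST/PUT/PATCH/DELETE, ever
    if any(path.startswith(p) for p in BLOCKED_PREFIXES):
        return False
    if path.endswith("/report"):
        return False  # the zip report under /scans is also excluded
    return any(path.startswith(p) for p in ALLOWED_GET_PREFIXES)
```

Because the excluded list is checked before the allowed list, a sensitive path like `/api/v1/providers/secrets` is rejected even though it shares the `/api/v1/providers` prefix.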

FAQs

1. Which LLM providers are supported?

Lighthouse AI supports three providers:
  • OpenAI - GPT models (GPT-5, GPT-4o, etc.)
  • Amazon Bedrock - Claude, Llama, Titan, and other models via AWS
  • OpenAI Compatible - Custom endpoints like OpenRouter, Ollama, or any OpenAI-compatible service
For detailed configuration instructions, see Using Multiple LLM Providers with Lighthouse.

2. Why a multi-agent supervisor model?

Context windows are limited. While demo data fits inside the context window, querying real-world data often exceeds it. A multi-agent architecture lets different agents fetch different slices of data and return only the minimum required data to the supervisor, spreading context window usage across agents.

3. Is my security data shared with LLM providers?

Minimal data is shared to generate useful responses. Agents can access security findings and remediation details when needed. Provider secrets are protected by design and cannot be read. The LLM provider credentials configured with Lighthouse AI are only accessible to the NextJS server and are never sent to the LLM providers. Resource metadata (names, tags, account/project IDs, etc.) may be shared with the configured LLM provider based on query requirements.

4. Can Lighthouse AI change my cloud environment?

No. The agent doesn’t have the tools to make changes, even if the configured cloud provider API keys include permissions to modify resources.
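The supervisor pattern described in FAQ 2 — worker agents fetch large payloads but hand back only the minimum required data — can be sketched as follows. The agent and supervisor functions, and the summary shape, are hypothetical illustrations of the idea, not Prowler’s actual agent code.

```python
def findings_agent(findings):
    """Hypothetical worker agent: processes a potentially large
    findings payload but returns only a compact summary."""
    by_severity = {}
    for finding in findings:
        sev = finding["severity"]
        by_severity[sev] = by_severity.get(sev, 0) + 1
    return {"total": len(findings), "by_severity": by_severity}

def supervisor(findings):
    """Hypothetical supervisor: answers from the worker's summary,
    so the raw findings never enter its own context window."""
    summary = findings_agent(findings)
    critical = summary["by_severity"].get("critical", 0)
    return f"{summary['total']} findings; critical: {critical}"
```

The point of the split is that the supervisor’s context grows with the size of the summary, not with the size of the raw API responses each worker fetched.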