## Overview
Claude Code can integrate with various third-party services and infrastructure to meet enterprise requirements. This page provides an overview of available integration options and helps you choose the right configuration for your organization.
## Provider comparison

| Feature | Anthropic | Amazon Bedrock | Google Vertex AI |
|---|---|---|---|
| Regions | Supported countries | Multiple AWS regions | Multiple GCP regions |
| Prompt caching | Enabled by default | Contact AWS for enablement | Contact Google for enablement |
| Authentication | API key | AWS credentials (IAM) | GCP credentials (OAuth/Service Account) |
| Cost tracking | Dashboard | AWS Cost Explorer | GCP Billing |
| Enterprise features | Teams, usage monitoring | IAM policies, CloudTrail | IAM roles, Cloud Audit Logs |
## Integration options

### Cloud providers

**Amazon Bedrock**
Use Claude models through AWS infrastructure with IAM-based authentication and AWS-native monitoring.

**Google Vertex AI**
Access Claude models via Google Cloud Platform with enterprise-grade security and compliance.

### Corporate infrastructure

**Corporate Proxy**
Configure Claude Code to work with your organization's proxy servers and SSL/TLS requirements.

**LLM Gateway**
Deploy centralized model access with usage tracking, budgeting, and audit logging.
## Mixing and matching settings

Claude Code supports flexible configuration options that allow you to combine different providers and infrastructure.

Understand the difference between:

- **Corporate proxy**: an HTTP/HTTPS proxy for routing traffic (set via `HTTPS_PROXY` or `HTTP_PROXY`)
- **LLM Gateway**: a service that handles authentication and provides provider-compatible endpoints (set via `ANTHROPIC_BASE_URL`, `ANTHROPIC_BEDROCK_BASE_URL`, or `ANTHROPIC_VERTEX_BASE_URL`)

Both configurations can be used in tandem.
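For example, a sketch combining both (the gateway and proxy URLs are placeholders for your own infrastructure):

```shell
# The LLM gateway provides the Anthropic-compatible endpoint (placeholder URL)
export ANTHROPIC_BASE_URL='https://your-llm-gateway.example.com'

# ...while traffic to it is routed through the corporate proxy (placeholder URL)
export HTTPS_PROXY='https://proxy.example.com:8080'
```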
### Using Bedrock with corporate proxy
Route Bedrock traffic through a corporate HTTP/HTTPS proxy:
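A minimal sketch, assuming AWS credentials are already configured locally (the proxy URL is a placeholder):

```shell
# Use Bedrock as the model provider (authenticates with your AWS credentials)
export CLAUDE_CODE_USE_BEDROCK=1

# Route traffic through the corporate proxy (placeholder URL)
export HTTPS_PROXY='https://proxy.example.com:8080'
```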
### Using Bedrock with LLM Gateway
Use a gateway service that provides Bedrock-compatible endpoints:
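A minimal sketch, assuming your gateway exposes a Bedrock-compatible endpoint at a placeholder URL:

```shell
# Use Bedrock as the model provider, pointed at the gateway (placeholder URL)
export CLAUDE_CODE_USE_BEDROCK=1
export ANTHROPIC_BEDROCK_BASE_URL='https://your-llm-gateway.example.com/bedrock'

# Skip local AWS authentication when the gateway handles provider credentials
export CLAUDE_CODE_SKIP_BEDROCK_AUTH=1
```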
### Using Vertex AI with corporate proxy
Route Vertex AI traffic through a corporate HTTP/HTTPS proxy:
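A minimal sketch, assuming GCP credentials are already configured locally (the region, project ID, and proxy URL are placeholders):

```shell
# Use Vertex AI as the model provider (authenticates with your GCP credentials)
export CLAUDE_CODE_USE_VERTEX=1
export CLOUD_ML_REGION=us-east5                       # placeholder region
export ANTHROPIC_VERTEX_PROJECT_ID=your-project-id    # placeholder project

# Route traffic through the corporate proxy (placeholder URL)
export HTTPS_PROXY='https://proxy.example.com:8080'
```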
### Using Vertex AI with LLM Gateway
Combine Google Vertex AI models with an LLM gateway for centralized management:
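A minimal sketch, assuming your gateway exposes a Vertex-compatible endpoint at a placeholder URL:

```shell
# Use Vertex AI as the model provider, pointed at the gateway (placeholder URL)
export CLAUDE_CODE_USE_VERTEX=1
export ANTHROPIC_VERTEX_BASE_URL='https://your-llm-gateway.example.com/vertex'

# Skip local GCP authentication when the gateway handles provider credentials
export CLAUDE_CODE_SKIP_VERTEX_AUTH=1
```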
## Authentication configuration

Claude Code uses the `ANTHROPIC_AUTH_TOKEN` for both the `Authorization` and `Proxy-Authorization` headers when needed. The `SKIP_AUTH` flags (`CLAUDE_CODE_SKIP_BEDROCK_AUTH`, `CLAUDE_CODE_SKIP_VERTEX_AUTH`) are used in LLM gateway scenarios where the gateway handles provider authentication.
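If your gateway or authenticating proxy expects a bearer token, a sketch (the token value is a placeholder):

```shell
# Sent as both the Authorization and Proxy-Authorization headers when needed
export ANTHROPIC_AUTH_TOKEN='your-gateway-token'   # placeholder value
```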
## Choosing the right integration
Consider these factors when selecting your integration approach:
### Direct provider access
Best for organizations that:
- Want the simplest setup
- Have existing AWS or GCP infrastructure
- Need provider-native monitoring and compliance
### Corporate proxy
Best for organizations that:
- Have existing corporate proxy requirements
- Need traffic monitoring and compliance
- Must route all traffic through specific network paths
### LLM Gateway
Best for organizations that:
- Need usage tracking across teams
- Want to dynamically switch between models
- Require custom rate limiting or budgets
- Need centralized authentication management
## Debugging
When debugging your third-party integration configuration:

- Use the `claude /status` slash command. This command provides observability into any applied authentication, proxy, and URL settings.
- Set the environment variable `export ANTHROPIC_LOG=debug` to log requests.
## Next steps
- Set up **Amazon Bedrock** for AWS-native integration
- Configure **Google Vertex AI** for GCP deployment
- Implement **Corporate Proxy** for network requirements
- Deploy **LLM Gateway** for enterprise management
- **Settings** for configuration options and environment variables