OpenAI Completions Adapter
The OpenAI Completions Adapter provides direct integration with OpenAI’s Chat Completions API by patching the openai.chat.completions.create
method. This adapter automatically injects policy guidelines into system prompts and applies outbound guardrails to responses.
Overview
Unlike the OpenAI Responses Adapter, which processes response objects after the fact, the Completions Adapter intercepts the API calls themselves to:
- Inject Policy Guidelines: Automatically augments system prompts with relevant policy directives
- Apply Outbound Guardrails: Evaluates and potentially blocks responses based on content policies
- Support Async Operations: Patches both sync and async completion methods
- Maintain Compatibility: Works transparently with existing OpenAI client code
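Under the hood this is ordinary Python monkey-patching: the bound `create` method is swapped for a wrapper that runs guardrail logic before and after the original call. The sketch below illustrates the pattern with a stub client; the hook names `pre` and `post` are illustrative, not Rizk's actual internals.

```python
import functools

class _Completions:
    def create(self, **kwargs):
        # Stand-in for the real API call: echo the messages back.
        return {"echo": kwargs["messages"]}

class _Chat:
    def __init__(self):
        self.completions = _Completions()

class FakeClient:
    def __init__(self):
        self.chat = _Chat()

def patch_create(client, pre, post):
    """Replace create() with a wrapper that runs pre/post hooks."""
    original = client.chat.completions.create

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        kwargs["messages"] = pre(kwargs.get("messages", []))
        return post(original(*args, **kwargs))

    client.chat.completions.create = wrapper

# Demo: the pre-hook appends a policy directive to the message list.
client = FakeClient()
patch_create(
    client,
    pre=lambda msgs: msgs + [{"role": "system", "content": "POLICY: be factual"}],
    post=lambda resp: resp,
)
result = client.chat.completions.create(messages=[{"role": "user", "content": "Hi"}])
print(result["echo"][-1]["content"])
```

Because the wrapper preserves the original signature via `functools.wraps`, existing call sites need no changes, which is what makes the integration transparent.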
Installation
```bash
pip install rizk[openai]
# or
pip install rizk openai
```
How It Works
The adapter patches the OpenAI client at runtime:
```python
# Before patching
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}]
)

# After Rizk initialization - same code, enhanced behavior
rizk = Rizk.init(app_name="MyApp", enabled=True)

# Now includes automatic policy injection and response filtering
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}]
)
```
Basic Usage
Automatic Integration
The adapter activates automatically when Rizk SDK is initialized:
```python
import openai
from rizk.sdk import Rizk

# Initialize Rizk - this patches OpenAI automatically
rizk = Rizk.init(
    app_name="OpenAI-App",
    api_key="your-rizk-api-key",
    enabled=True
)

# Your existing OpenAI code works unchanged
client = openai.OpenAI(api_key="your-openai-key")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing"}
    ]
)

print(response.choices[0].message.content)
```
With Custom Policies
```python
import openai
from rizk.sdk import Rizk

# Initialize with custom policies
rizk = Rizk.init(
    app_name="SecureChat",
    policies_path="./policies",
    enabled=True
)

# Policy guidelines are automatically injected
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a customer service assistant."},
        {"role": "user", "content": "Can you help me with my account?"}
    ]
)

# Response is automatically evaluated against policies
print(response.choices[0].message.content)
```
Policy Injection
The adapter automatically injects policy guidelines into system prompts:
Before Injection
```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about AI safety"}
]
```
After Injection
```python
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant.\n\n"
            "IMPORTANT POLICY DIRECTIVES:\n"
            "• Ensure all AI safety discussions are balanced and factual\n"
            "• Avoid speculation about future AI capabilities\n"
            "• Focus on current best practices and research"
        )
    },
    {"role": "user", "content": "Tell me about AI safety"}
]
```
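Conceptually, the injection step merges the active guidelines into the first system message, or prepends one if the conversation has none. A hypothetical sketch of that transformation (not the adapter's actual code):

```python
def inject_guidelines(messages, guidelines):
    """Return a copy of `messages` with policy directives appended to the
    system prompt; a system message is prepended if there isn't one."""
    block = "IMPORTANT POLICY DIRECTIVES:\n" + "\n".join(
        f"• {g}" for g in guidelines
    )
    out = [dict(m) for m in messages]  # shallow copies, don't mutate the input
    for m in out:
        if m["role"] == "system":
            m["content"] = f"{m['content']}\n\n{block}"
            return out
    return [{"role": "system", "content": block}] + out

msgs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about AI safety"},
]
patched = inject_guidelines(msgs, ["Keep discussions balanced and factual"])
print(patched[0]["content"])
```

Copying the message dicts matters: the caller's original `messages` list must stay untouched so retries and logging see the un-augmented prompt.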
Outbound Guardrails
The adapter evaluates responses and can block inappropriate content:
```python
import openai
from rizk.sdk import Rizk

rizk = Rizk.init(
    app_name="ContentFilter",
    enabled=True
)

# This response might be blocked if it violates policies
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Tell me something inappropriate"}
    ]
)

# If blocked, you'll get a modified response
if hasattr(response, '_rizk_blocked'):
    print("Response was blocked by policy")
    print(f"Reason: {response._rizk_block_reason}")
```
Async Support
The adapter automatically patches async methods:
```python
import asyncio
import openai
from rizk.sdk import Rizk

async def async_chat():
    rizk = Rizk.init(app_name="AsyncApp", enabled=True)

    # Async calls are also patched (openai>=1.0 uses the AsyncOpenAI client)
    client = openai.AsyncOpenAI()
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "Hello async world!"}
        ]
    )
    return response.choices[0].message.content

# Run async function
result = asyncio.run(async_chat())
print(result)
```
Framework Integration
With Decorators
```python
import openai
from rizk.sdk import Rizk
from rizk.sdk.decorators import workflow, guardrails

rizk = Rizk.init(app_name="DecoratedApp", enabled=True)

@workflow(name="chat_completion", organization_id="demo", project_id="openai")
@guardrails()
def chat_with_openai(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Chat with OpenAI with full monitoring and governance."""
    response = openai.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=150
    )
    return response.choices[0].message.content

# Usage
result = chat_with_openai("Explain machine learning in simple terms")
print(result)
```
Error Handling
```python
import openai
from rizk.sdk import Rizk

rizk = Rizk.init(app_name="RobustApp", enabled=True)

def safe_chat(prompt: str) -> str:
    try:
        response = openai.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=100
        )

        # Check if response was blocked
        if hasattr(response, '_rizk_blocked') and response._rizk_blocked:
            return f"Response blocked: {response._rizk_block_reason}"

        return response.choices[0].message.content

    except openai.RateLimitError:
        return "Rate limit exceeded. Please try again later."
    except openai.APIError as e:
        return f"OpenAI API error: {str(e)}"
    except Exception as e:
        return f"Unexpected error: {str(e)}"

# Usage
result = safe_chat("Tell me a joke")
print(result)
```
Configuration
Custom Policy Paths
```python
from rizk.sdk import Rizk

# Load policies from custom directory
rizk = Rizk.init(
    app_name="CustomPolicies",
    policies_path="/path/to/policies",
    enabled=True
)
```
Disable Specific Features
```python
from rizk.sdk import Rizk

# Disable outbound guardrails but keep policy injection
rizk = Rizk.init(
    app_name="PolicyOnly",
    enabled=True,
    # Custom configuration would go here
)
```
Monitoring and Observability
The adapter automatically creates spans for all OpenAI API calls:
```python
import openai
from rizk.sdk import Rizk

# Enable detailed tracing
rizk = Rizk.init(
    app_name="TracedApp",
    enabled=True,
    trace_content=True  # Include request/response content in traces
)

# This call will be fully traced
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}]
)

# Traces include:
# - Request parameters (model, messages, etc.)
# - Response metadata (tokens used, finish reason, etc.)
# - Policy decisions and injections
# - Performance metrics
```
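The span creation itself follows a standard pattern: record the request attributes, time the call, and capture the outcome. A minimal, library-free sketch of that pattern, using a plain dict as a stand-in for a real OpenTelemetry span:

```python
import time

def traced_call(fn, record, **kwargs):
    """Call `fn`, recording request params, duration, and outcome into
    `record` (a stand-in for a real tracing span)."""
    record["request.model"] = kwargs.get("model")
    start = time.perf_counter()
    try:
        result = fn(**kwargs)
        record["status"] = "ok"
        return result
    except Exception as exc:
        record["status"] = f"error: {exc}"
        raise
    finally:
        # The finally block runs even on exceptions, so duration is always set.
        record["duration_ms"] = (time.perf_counter() - start) * 1000

span = {}
result = traced_call(lambda **kw: {"id": "demo"}, span, model="gpt-3.5-turbo")
print(span["status"], span["request.model"])
```

In the real adapter the dict would be an OpenTelemetry span and the attributes would follow its semantic conventions; the control flow is the same.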
Best Practices
1. System Prompt Design
Design your system prompts to work well with policy injection:
```python
# Good - Clear separation of concerns
system_prompt = """You are a financial advisor assistant.

Your role is to provide general financial guidance and education.
You should always remind users to consult with qualified professionals.

Guidelines will be automatically added below this section."""

# The adapter will append policy guidelines after your content
```
2. Error Handling
Always check for blocked responses:
```python
def handle_openai_response(response):
    # Check if response was blocked
    if hasattr(response, '_rizk_blocked') and response._rizk_blocked:
        # Handle blocked response
        return {
            "blocked": True,
            "reason": response._rizk_block_reason,
            "content": None
        }

    # Normal response
    return {
        "blocked": False,
        "reason": None,
        "content": response.choices[0].message.content
    }
```
3. Performance Considerations
```python
# For high-throughput applications
rizk = Rizk.init(
    app_name="HighThroughput",
    enabled=True,
    # Configure for performance
    disable_batch=True,    # Reduce latency
    llm_cache_size=10000   # Larger cache for repeated queries
)
```
Troubleshooting
Common Issues
1. Policies Not Being Injected
```python
# Check if guidelines are available
from rizk.sdk.guardrails.engine import GuardrailsEngine

engine = GuardrailsEngine.get_instance()
guidelines = engine.get_current_guidelines()
print(f"Available guidelines: {guidelines}")
```
2. Response Blocking Issues
```python
# Debug response evaluation
response = openai.chat.completions.create(...)

if hasattr(response, '_rizk_blocked'):
    print(f"Blocked: {response._rizk_blocked}")
    print(f"Reason: {response._rizk_block_reason}")
    print(f"Original content: {response._rizk_original_content}")
```
3. Import Errors
```bash
# Ensure OpenAI is installed (quote the spec so the shell doesn't treat > as a redirect)
pip install "openai>=1.0.0"

# Check Rizk installation
pip show rizk
```
Debug Mode
Enable debug logging to see adapter behavior:
```python
import logging
from rizk.sdk import Rizk

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("rizk.adapters.openai_completion")
logger.setLevel(logging.DEBUG)

rizk = Rizk.init(app_name="DebugApp", enabled=True)
```
Advanced Usage
Custom Policy Evaluation
```python
from rizk.sdk import Rizk
from rizk.sdk.guardrails.types import PolicySet, Policy

# Create custom policy
custom_policy = Policy(
    id="openai_custom",
    name="OpenAI Custom Policy",
    description="Custom rules for OpenAI interactions",
    action="allow",
    guidelines=[
        "Always provide sources for factual claims",
        "Limit responses to 200 words maximum",
        "Use professional tone for business queries"
    ]
)

policy_set = PolicySet(policies=[custom_policy])

# Initialize with custom policies
rizk = Rizk.init(
    app_name="CustomPolicyApp",
    enabled=True
)

# Your OpenAI calls will now use these custom policies
```
Integration with Other Systems
```python
import openai
from rizk.sdk import Rizk

# Initialize with custom telemetry endpoint
rizk = Rizk.init(
    app_name="IntegratedApp",
    opentelemetry_endpoint="https://your-otlp-collector.com",
    enabled=True
)

# All OpenAI calls will send telemetry to your custom endpoint
response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
```
The OpenAI Completions Adapter provides seamless integration with minimal code changes, ensuring your OpenAI applications are automatically governed and monitored according to your organization’s policies.