Enhanced Conversions Operations Guide

Overview

This guide covers the operational aspects of managing the Enhanced Conversions worker, including deployment, monitoring, troubleshooting, and maintenance procedures.

Worker Architecture

Components

enhanced_conversions_worker.py
├── EnhancedConversionsWorker (Main class)
│   ├── get_pending_conversions() - Fetch from DB
│   ├── get_customer_data_from_cdp() - Enrich with customer info
│   ├── get_conversion_action_config() - Load credentials
│   ├── process_conversion() - Upload to Google Ads
│   ├── update_conversion_status() - Track results
│   └── update_daily_stats() - Aggregate metrics
│
├── EnhancedConversionsAPI (google_ads_enhanced_conversions.py)
│   ├── normalize_and_hash_email()
│   ├── normalize_and_hash_phone()
│   ├── normalize_and_hash_name()
│   ├── normalize_and_hash_address()
│   └── upload_conversion_adjustment()
│
└── Database Schema (enhanced_conversions)
    ├── uploads (Conversion queue)
    ├── conversion_actions (Credentials)
    ├── upload_stats (Daily metrics)
    ├── v_pending_uploads (Pending queue view)
    └── v_daily_performance (Performance view)
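
The normalize-and-hash helpers follow Google's Enhanced Conversions matching rules: trim and lowercase string values, put phone numbers in E.164 form, then take the SHA-256 hex digest. A minimal sketch of that contract (the real implementations live in google_ads_enhanced_conversions.py; these bodies are illustrative):

import hashlib

def _sha256_hex(value: str) -> str:
    """SHA-256 hex digest of a UTF-8 string, the format Google Ads matches on."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def normalize_and_hash_email(email: str) -> str:
    # Matching is done on the trimmed, lowercased address.
    return _sha256_hex(email.strip().lower())

def normalize_and_hash_phone(phone: str) -> str:
    # Phones must be E.164 ("+" followed by digits only) before hashing.
    digits = "".join(ch for ch in phone if ch.isdigit())
    return _sha256_hex("+" + digits)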

Worker Flow

sequenceDiagram
    participant W as Worker
    participant DB as Database
    participant CDP as CDP Data
    participant G as Google Ads API

    W->>DB: Fetch pending conversions (limit 100)
    loop For each conversion
        W->>DB: Get credentials config
        W->>CDP: Get customer data
        W->>W: Normalize & hash PII
        W->>G: Upload conversion + hashed data
        G-->>W: Success/failure response
        W->>DB: Update conversion status
        W->>DB: Update daily stats
    end
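
The "Upload conversion + hashed data" step goes through the Google Ads ConversionAdjustmentUploadService: enhanced conversions are sent as ENHANCEMENT adjustments carrying hashed user identifiers. A condensed sketch of that call, assuming the google-ads==23.1.0 client pinned in requirements.txt (field names follow the library; error handling is trimmed):

from google.ads.googleads.client import GoogleAdsClient

def upload_enhancement(client: GoogleAdsClient, customer_id: str,
                       conversion_action: str, gclid: str,
                       conversion_date_time: str, order_id: str,
                       hashed_email: str):
    # Build an ENHANCEMENT adjustment keyed by gclid + conversion time.
    adjustment = client.get_type("ConversionAdjustment")
    adjustment.conversion_action = conversion_action  # resource name
    adjustment.adjustment_type = (
        client.enums.ConversionAdjustmentTypeEnum.ENHANCEMENT)
    adjustment.order_id = order_id
    adjustment.gclid_date_time_pair.gclid = gclid
    adjustment.gclid_date_time_pair.conversion_date_time = conversion_date_time

    # Attach the hashed PII; more identifiers (phone, address) raise match rate.
    identifier = client.get_type("UserIdentifier")
    identifier.hashed_email = hashed_email
    adjustment.user_identifiers.append(identifier)

    request = client.get_type("UploadConversionAdjustmentsRequest")
    request.customer_id = customer_id
    request.conversion_adjustments.append(adjustment)
    request.partial_failure = True  # per-row errors instead of failing the batch

    service = client.get_service("ConversionAdjustmentUploadService")
    return service.upload_conversion_adjustments(request=request)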

Railway Deployment

Environment Variables

Configure the following in Railway:

# Database connection
DATABASE_URL=postgresql://user:pass@host:port/db

# Worker configuration
WORKER_MODE=enhanced_conversions
RUN_MODE=scheduler # or 'immediate' for one-time runs

# Timezone
TZ=America/Argentina/Buenos_Aires

# Optional: Dry-run mode (testing)
DRY_RUN=false

# Optional: Batch size
BATCH_SIZE=100
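
The worker is expected to read these at startup; a short sketch of that config load (variable names mirror the list above):

import os

DATABASE_URL = os.environ["DATABASE_URL"]                  # required
RUN_MODE = os.getenv("RUN_MODE", "scheduler")
DRY_RUN = os.getenv("DRY_RUN", "false").lower() == "true"
BATCH_SIZE = int(os.getenv("BATCH_SIZE", "100"))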

Service Configuration

Option 1: Dedicated Service

# Create dedicated Railway service for Enhanced Conversions
railway service create enhanced-conversions-worker

# Set environment variables
railway variables --set "WORKER_MODE=enhanced_conversions"
railway variables --set "RUN_MODE=scheduler"
railway variables --set "DATABASE_URL=$DATABASE_URL"

# Deploy worker
railway up

Option 2: Integrate with Existing Worker Service

# Add to existing nerdistan-worker service
cd /path/to/nerdistan-worker

# Update main.py to support enhanced_conversions mode
# (if not already implemented)

# Set worker mode
railway service select nerdistan-worker
railway variables --set "WORKER_MODE=enhanced_conversions"
railway up

Deployment Steps

# 1. Clone worker repository
git clone https://github.com/NomadaDigital01/nerdistan-worker.git
cd nerdistan-worker

# 2. Create start_enhanced_conversions.py entry point
cat > start_enhanced_conversions.py << 'EOF'
"""
Enhanced Conversions Worker Entry Point
Processes pending conversions and uploads to Google Ads
"""

import os

from enhanced_conversions_worker import EnhancedConversionsWorker

def main():
    """Main entry point for Enhanced Conversions worker"""
    run_mode = os.getenv('RUN_MODE', 'scheduler')
    dry_run = os.getenv('DRY_RUN', 'false').lower() == 'true'

    worker = EnhancedConversionsWorker(dry_run=dry_run)

    if run_mode == 'scheduler':
        print("Starting Enhanced Conversions Worker in scheduler mode...")
        worker.run_scheduler()  # Runs every 30 minutes
    else:
        print("Running Enhanced Conversions Worker immediately...")
        results = worker.process_all_pending()
        print(f"Processed {results['total']} conversions")
        print(f"Successful: {results['successful']}")
        print(f"Failed: {results['failed']}")

if __name__ == '__main__':
    main()
EOF

# 3. Update requirements.txt
cat >> requirements.txt << 'EOF'
google-ads==23.1.0
schedule==1.2.0
EOF

# 4. Commit and push
git add .
git commit -m "feat: Add Enhanced Conversions worker"
git push origin main

# 5. Deploy to Railway
railway link # Select: nerdistan-worker
railway up

Verify Deployment

# Check Railway logs
railway logs --tail=100

# Expected output:
# Enhanced Conversions Worker initialized (dry_run=False)
# Starting Enhanced Conversions Worker scheduler...
# Found 45 pending conversions
# Processing conversion 123 (gclid: Cj0KCQjw...)
# Enhanced conversion uploaded successfully: Cj0KCQjw...
# Processing completed: 42/45 successful

Database Setup

Schema Initialization

-- Run schema creation (idempotent)
-- Execute in Railway PostgreSQL or via psql

-- Create schema
CREATE SCHEMA IF NOT EXISTS enhanced_conversions;

-- Create uploads table
CREATE TABLE IF NOT EXISTS enhanced_conversions.uploads (
    id SERIAL PRIMARY KEY,
    tenant_id INTEGER NOT NULL,
    gclid VARCHAR(255) NOT NULL,
    conversion_action_id VARCHAR(50) NOT NULL,
    conversion_date_time TIMESTAMP WITH TIME ZONE NOT NULL,
    conversion_value DECIMAL(10,2) NOT NULL,
    currency_code VARCHAR(3) NOT NULL,
    order_id VARCHAR(255),

    -- Customer data (hashed)
    hashed_email VARCHAR(64),
    hashed_phone VARCHAR(64),
    hashed_first_name VARCHAR(64),
    hashed_last_name VARCHAR(64),

    -- Upload status
    upload_status VARCHAR(20) NOT NULL DEFAULT 'pending',
    upload_attempts INTEGER DEFAULT 0,
    uploaded_at TIMESTAMP WITH TIME ZONE,
    error_message TEXT,

    -- Metadata
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,

    CONSTRAINT unique_conversion UNIQUE (tenant_id, gclid, order_id)
);

-- Create indexes
CREATE INDEX IF NOT EXISTS idx_uploads_pending
ON enhanced_conversions.uploads(tenant_id, upload_status)
WHERE upload_status = 'pending';

CREATE INDEX IF NOT EXISTS idx_uploads_tenant_date
ON enhanced_conversions.uploads(tenant_id, conversion_date_time);

-- Create conversion_actions table
CREATE TABLE IF NOT EXISTS enhanced_conversions.conversion_actions (
    id SERIAL PRIMARY KEY,
    tenant_id INTEGER NOT NULL,
    conversion_action_id VARCHAR(50) NOT NULL,
    conversion_action_name VARCHAR(255) NOT NULL,
    conversion_category VARCHAR(50),
    is_active BOOLEAN DEFAULT true,

    -- Google Ads credentials (encrypted)
    developer_token VARCHAR(255) NOT NULL,
    client_id VARCHAR(255) NOT NULL,
    client_secret VARCHAR(255) NOT NULL,
    refresh_token TEXT NOT NULL,
    login_customer_id VARCHAR(20),
    ads_customer_id VARCHAR(20) NOT NULL,

    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,

    CONSTRAINT unique_tenant_action UNIQUE (tenant_id, conversion_action_id)
);

-- Create upload_stats table
CREATE TABLE IF NOT EXISTS enhanced_conversions.upload_stats (
    id SERIAL PRIMARY KEY,
    tenant_id INTEGER NOT NULL,
    date DATE NOT NULL,
    total_conversions INTEGER DEFAULT 0,
    successful_uploads INTEGER DEFAULT 0,
    failed_uploads INTEGER DEFAULT 0,
    total_conversion_value DECIMAL(12,2) DEFAULT 0,
    avg_match_rate DECIMAL(5,2),

    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,

    CONSTRAINT unique_tenant_date UNIQUE (tenant_id, date)
);

-- Create views
CREATE OR REPLACE VIEW enhanced_conversions.v_pending_uploads AS
SELECT
    tenant_id,
    COUNT(*) as pending_count,
    SUM(conversion_value) as pending_value,
    MIN(created_at) as oldest_pending,
    MAX(created_at) as newest_pending
FROM enhanced_conversions.uploads
WHERE upload_status = 'pending'
GROUP BY tenant_id;

CREATE OR REPLACE VIEW enhanced_conversions.v_daily_performance AS
SELECT
    u.tenant_id,
    DATE(u.created_at) as upload_date,
    COUNT(*) as total_uploads,
    SUM(CASE WHEN u.upload_status = 'success' THEN 1 ELSE 0 END) as successful,
    SUM(CASE WHEN u.upload_status = 'failed' THEN 1 ELSE 0 END) as failed,
    ROUND(100.0 * SUM(CASE WHEN u.upload_status = 'success' THEN 1 ELSE 0 END) / COUNT(*), 2) as success_rate,
    SUM(u.conversion_value) as total_value
FROM enhanced_conversions.uploads u
GROUP BY u.tenant_id, DATE(u.created_at)
ORDER BY upload_date DESC;

-- Create queue function
CREATE OR REPLACE FUNCTION enhanced_conversions.queue_conversion(
    p_tenant_id INTEGER,
    p_gclid VARCHAR(255),
    p_conversion_action_id VARCHAR(50),
    p_conversion_date_time TIMESTAMP WITH TIME ZONE,
    p_conversion_value DECIMAL(10,2),
    p_currency_code VARCHAR(3),
    p_order_id VARCHAR(255),
    p_customer_data JSONB
) RETURNS INTEGER AS $$
DECLARE
    v_upload_id INTEGER;
BEGIN
    INSERT INTO enhanced_conversions.uploads (
        tenant_id,
        gclid,
        conversion_action_id,
        conversion_date_time,
        conversion_value,
        currency_code,
        order_id,
        hashed_email,
        hashed_phone,
        hashed_first_name,
        hashed_last_name,
        upload_status
    ) VALUES (
        p_tenant_id,
        p_gclid,
        p_conversion_action_id,
        p_conversion_date_time,
        p_conversion_value,
        p_currency_code,
        p_order_id,
        p_customer_data->>'hashed_email',
        p_customer_data->>'hashed_phone',
        p_customer_data->>'hashed_first_name',
        p_customer_data->>'hashed_last_name',
        'pending'
    )
    -- Note: rows with a NULL order_id never conflict (NULLs compare as distinct),
    -- so duplicates are only suppressed when an order_id is present.
    ON CONFLICT (tenant_id, gclid, order_id) DO NOTHING
    RETURNING id INTO v_upload_id;

    RETURN v_upload_id;
END;
$$ LANGUAGE plpgsql;
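
Producers (for example, an order-ingestion job) can queue conversions through this function from Python. A sketch using psycopg2, with the order/customer dicts as placeholder inputs:

import json
import os

import psycopg2

def queue_conversion(order: dict, customer_data: dict):
    """Queue one conversion; returns the upload id, or None if it already existed."""
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT enhanced_conversions.queue_conversion("
            "%s, %s, %s, %s, %s, %s, %s, %s::jsonb)",
            (order["tenant_id"], order["gclid"], order["conversion_action_id"],
             order["conversion_date_time"], order["value"], order["currency"],
             order["order_id"], json.dumps(customer_data)),
        )
        row = cur.fetchone()
    conn.close()
    return row[0]  # NULL (None) when ON CONFLICT suppressed the insert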

Configure Tenant Credentials

-- Add conversion action for a tenant
INSERT INTO enhanced_conversions.conversion_actions (
    tenant_id,
    conversion_action_id,
    conversion_action_name,
    conversion_category,
    is_active,
    developer_token,
    client_id,
    client_secret,
    refresh_token,
    login_customer_id,
    ads_customer_id
) VALUES (
    56,  -- Chelsea IO - Exit
    '987654321',
    'Purchase',
    'PURCHASE',
    true,
    'YOUR_DEVELOPER_TOKEN',
    'YOUR_CLIENT_ID.apps.googleusercontent.com',
    'YOUR_CLIENT_SECRET',
    'YOUR_REFRESH_TOKEN',
    NULL,  -- Optional MCC account
    '1234567890'  -- Digits only; the Google Ads API rejects dashed customer IDs
)
ON CONFLICT (tenant_id, conversion_action_id) DO UPDATE SET
    is_active = EXCLUDED.is_active,
    developer_token = EXCLUDED.developer_token,
    client_id = EXCLUDED.client_id,
    client_secret = EXCLUDED.client_secret,
    refresh_token = EXCLUDED.refresh_token,
    ads_customer_id = EXCLUDED.ads_customer_id,
    updated_at = CURRENT_TIMESTAMP;

Monitoring

Real-Time Monitoring Queries

Pending Conversions Queue

-- View pending conversions by tenant
SELECT * FROM enhanced_conversions.v_pending_uploads;

-- Example output:
-- tenant_id | pending_count | pending_value | oldest_pending      | newest_pending
-- ----------+---------------+---------------+---------------------+---------------------
--        56 |            23 |     345000.00 | 2024-10-08 08:15:00 | 2024-10-08 09:45:00
--        52 |            12 |     178500.00 | 2024-10-08 08:30:00 | 2024-10-08 09:30:00

Recent Uploads Status

-- Last 50 uploads with status
SELECT
id,
tenant_id,
gclid,
conversion_value,
currency_code,
upload_status,
upload_attempts,
uploaded_at,
CASE
WHEN error_message IS NOT NULL THEN LEFT(error_message, 100)
ELSE NULL
END as error_summary
FROM enhanced_conversions.uploads
ORDER BY created_at DESC
LIMIT 50;

Success Rate by Tenant

-- Upload success rate last 7 days
SELECT
tenant_id,
COUNT(*) as total_uploads,
SUM(CASE WHEN upload_status = 'success' THEN 1 ELSE 0 END) as successful,
SUM(CASE WHEN upload_status = 'failed' THEN 1 ELSE 0 END) as failed,
SUM(CASE WHEN upload_status = 'pending' THEN 1 ELSE 0 END) as pending,
ROUND(100.0 * SUM(CASE WHEN upload_status = 'success' THEN 1 ELSE 0 END) / COUNT(*), 2) as success_rate_pct
FROM enhanced_conversions.uploads
WHERE created_at >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY tenant_id
ORDER BY total_uploads DESC;

Daily Performance Trend

-- Daily performance for last 30 days
SELECT *
FROM enhanced_conversions.v_daily_performance
WHERE upload_date >= CURRENT_DATE - INTERVAL '30 days'
AND tenant_id = 56
ORDER BY upload_date DESC;

Monitoring Dashboard SQL

-- Complete monitoring dashboard query
WITH pending_summary AS (
SELECT
tenant_id,
COUNT(*) as pending_count,
SUM(conversion_value) as pending_value
FROM enhanced_conversions.uploads
WHERE upload_status = 'pending'
GROUP BY tenant_id
),
recent_performance AS (
SELECT
tenant_id,
COUNT(*) as total_last_24h,
SUM(CASE WHEN upload_status = 'success' THEN 1 ELSE 0 END) as success_last_24h,
ROUND(100.0 * SUM(CASE WHEN upload_status = 'success' THEN 1 ELSE 0 END) / COUNT(*), 2) as success_rate_24h
FROM enhanced_conversions.uploads
WHERE created_at >= NOW() - INTERVAL '24 hours'
GROUP BY tenant_id
),
weekly_stats AS (
SELECT
tenant_id,
SUM(total_conversions) as total_week,
SUM(successful_uploads) as success_week,
AVG(avg_match_rate) as avg_match_rate_week
FROM enhanced_conversions.upload_stats
WHERE date >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY tenant_id
)
SELECT
ca.tenant_id,
ca.conversion_action_name,
ca.ads_customer_id,
ca.is_active,
COALESCE(ps.pending_count, 0) as pending_conversions,
COALESCE(ps.pending_value, 0) as pending_value,
COALESCE(rp.total_last_24h, 0) as uploads_24h,
COALESCE(rp.success_rate_24h, 0) as success_rate_24h,
COALESCE(ws.total_week, 0) as uploads_7d,
COALESCE(ws.avg_match_rate_week, 0) as avg_match_rate_7d
FROM enhanced_conversions.conversion_actions ca
LEFT JOIN pending_summary ps ON ca.tenant_id = ps.tenant_id
LEFT JOIN recent_performance rp ON ca.tenant_id = rp.tenant_id
LEFT JOIN weekly_stats ws ON ca.tenant_id = ws.tenant_id
WHERE ca.is_active = true
ORDER BY ca.tenant_id;

Railway Monitoring

# View live logs
railway logs --tail=100 --service=enhanced-conversions-worker

# Filter for errors only
railway logs --filter="error" --tail=50

# Export logs to file
railway logs --tail=1000 > enhanced_conversions_logs.txt

# Check worker status
railway status --service=enhanced-conversions-worker

Worker Schedule

Default Schedule

The worker runs every 30 minutes when deployed in scheduler mode:

# In enhanced_conversions_worker.py
schedule.every(30).minutes.do(self.process_all_pending)
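
Under the hood, scheduler mode amounts to registering the job and polling. A sketch of that loop using the schedule package from requirements.txt (the callback is the worker method shown above):

import time

import schedule

def run_scheduler(process_all_pending) -> None:
    """Register the job, then wake periodically to run anything due."""
    schedule.every(30).minutes.do(process_all_pending)
    while True:
        schedule.run_pending()
        time.sleep(60)  # check once a minute for due jobs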

Customizing Schedule

Option 1: Modify Worker Code

# Change schedule interval
schedule.every(15).minutes.do(self.process_all_pending) # Every 15 minutes
schedule.every(1).hours.do(self.process_all_pending) # Every hour
schedule.every().day.at("10:00").do(self.process_all_pending) # Daily at 10 AM

Option 2: Railway Cron (railway.json)

{
  "build": {
    "builder": "NIXPACKS"
  },
  "deploy": {
    "startCommand": "python start_enhanced_conversions.py",
    "restartPolicyType": "ON_FAILURE",
    "restartPolicyMaxRetries": 3
  },
  "cron": [
    {
      "schedule": "*/30 * * * *",
      "command": "python enhanced_conversions_worker.py --batch-size 100"
    }
  ]
}

Error Handling and Retry Logic

Automatic Retry

The worker automatically retries failed uploads up to 3 times:

# In worker query
WHERE upload_status = 'pending'
AND upload_attempts < 3

Retry Logic Flow

1. First attempt (upload_attempts = 0):
   • Try to upload the conversion
   • On failure: status stays 'pending', attempts incremented
2. Second attempt (upload_attempts = 1):
   • Retried on the next 30-minute scheduler cycle
   • On failure: status stays 'pending', attempts incremented
3. Third attempt (upload_attempts = 2):
   • Final retry
   • On failure: status changes to 'failed', attempts incremented
4. After 3 attempts (upload_attempts >= 3):
   • Conversion marked as 'failed'
   • Excluded from future processing
   • Manual intervention required (see the status-update sketch below)
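
This state machine is just an attempt counter on the uploads row. A sketch of the status update (the assumed shape of update_conversion_status; the real method lives in enhanced_conversions_worker.py):

def update_conversion_status(cur, upload_id: int, success: bool,
                             error: str = None, max_attempts: int = 3) -> None:
    """Record one upload attempt on a psycopg2 cursor."""
    if success:
        cur.execute(
            """UPDATE enhanced_conversions.uploads
               SET upload_status = 'success',
                   uploaded_at = NOW(),
                   upload_attempts = upload_attempts + 1,
                   updated_at = NOW()
             WHERE id = %s""",
            (upload_id,))
    else:
        # Stay 'pending' until the counter reaches max_attempts, then 'failed'.
        cur.execute(
            """UPDATE enhanced_conversions.uploads
               SET upload_attempts = upload_attempts + 1,
                   upload_status = CASE
                       WHEN upload_attempts + 1 >= %s THEN 'failed'
                       ELSE 'pending'
                   END,
                   error_message = %s,
                   updated_at = NOW()
             WHERE id = %s""",
            (max_attempts, error, upload_id))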

Manual Retry

-- Reset failed conversions for retry
UPDATE enhanced_conversions.uploads
SET upload_status = 'pending',
upload_attempts = 0,
error_message = NULL,
updated_at = CURRENT_TIMESTAMP
WHERE upload_status = 'failed'
AND tenant_id = 56
AND created_at >= CURRENT_DATE - INTERVAL '7 days';

Inspect Failed Conversions

-- View all failed conversions with error details
SELECT
id,
tenant_id,
gclid,
order_id,
conversion_value,
upload_attempts,
error_message,
created_at
FROM enhanced_conversions.uploads
WHERE upload_status = 'failed'
ORDER BY created_at DESC;

-- Count failures by error type
SELECT
tenant_id,
LEFT(error_message, 50) as error_type,
COUNT(*) as failure_count
FROM enhanced_conversions.uploads
WHERE upload_status = 'failed'
GROUP BY tenant_id, LEFT(error_message, 50)
ORDER BY failure_count DESC;

CDP Data Requirements

Required Fields

The worker requires the following customer data from CDP:

Field         Required  Source         Example
email         Yes       cdp.customers  customer@example.com
phone         No        cdp.customers  +5491112345678
first_name    No        cdp.customers  María
last_name     No        cdp.customers  González
street        No        cdp.customers  Av. Corrientes 1234
city          No        cdp.customers  Buenos Aires
state         No        cdp.customers  CABA
postal_code   No        cdp.customers  C1043
country       No        cdp.customers  AR
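
Only email is mandatory; each optional field that is present is normalized, hashed, and added to the payload, which raises the match rate. A sketch of that mapping (helper logic is illustrative; the real normalization lives in google_ads_enhanced_conversions.py):

import hashlib

def _h(value: str) -> str:
    """Hash a trimmed, lowercased string the way Google Ads expects."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_customer_data(row: dict) -> dict:
    """Map a cdp.customers row to the hashed fields stored on the upload."""
    data = {"hashed_email": _h(row["email"])}  # email is required
    if row.get("phone"):
        digits = "".join(ch for ch in row["phone"] if ch.isdigit())
        data["hashed_phone"] = hashlib.sha256(
            ("+" + digits).encode("utf-8")).hexdigest()  # E.164 before hashing
    if row.get("first_name"):
        data["hashed_first_name"] = _h(row["first_name"])
    if row.get("last_name"):
        data["hashed_last_name"] = _h(row["last_name"])
    return data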

Data Enrichment Query

-- Query used by worker to fetch customer data
SELECT
c.email,
c.phone,
c.first_name,
c.last_name,
c.street,
c.city,
c.state,
c.postal_code,
c.country
FROM cdp.customers c
INNER JOIN vtex_orders o ON o.email = c.email
WHERE o.tenant_id = %s
AND o.order_id = %s
LIMIT 1;

Data Quality Check

-- Check data completeness for Enhanced Conversions
SELECT
tenant_id,
COUNT(*) as total_customers,
COUNT(email) as has_email,
COUNT(phone) as has_phone,
COUNT(CASE WHEN first_name IS NOT NULL AND last_name IS NOT NULL THEN 1 END) as has_full_name,
COUNT(CASE WHEN street IS NOT NULL AND city IS NOT NULL THEN 1 END) as has_address,
ROUND(100.0 * COUNT(email) / COUNT(*), 2) as email_rate,
ROUND(100.0 * COUNT(phone) / COUNT(*), 2) as phone_rate,
ROUND(100.0 * COUNT(CASE WHEN first_name IS NOT NULL AND last_name IS NOT NULL THEN 1 END) / COUNT(*), 2) as name_rate
FROM cdp.customers
WHERE tenant_id IN (
SELECT DISTINCT tenant_id
FROM enhanced_conversions.conversion_actions
WHERE is_active = true
)
GROUP BY tenant_id
ORDER BY total_customers DESC;

Testing

Dry-Run Mode

Test the worker without actually uploading to Google Ads:

# Local testing
python enhanced_conversions_worker.py --dry-run

# Railway testing
railway variables --set "DRY_RUN=true"
railway up
railway logs --tail=100

# Expected output:
# [DRY RUN] Would upload conversion Cj0KCQjw... with customer data: customer@example.com
# Processing completed: 45/45 successful (dry run)

Single Tenant Testing

# Test specific tenant only
python enhanced_conversions_worker.py --tenant-id 56 --batch-size 10

Manual Conversion Queue

-- Queue a test conversion
SELECT enhanced_conversions.queue_conversion(
    p_tenant_id := 56,
    p_gclid := 'TEST_GCLID_' || EXTRACT(EPOCH FROM NOW())::TEXT,
    p_conversion_action_id := '987654321',
    p_conversion_date_time := NOW(),
    p_conversion_value := 1000.00,
    p_currency_code := 'ARS',
    p_order_id := 'TEST_ORDER_' || EXTRACT(EPOCH FROM NOW())::TEXT,
    p_customer_data := jsonb_build_object(
        -- In production, values are trimmed/lowercased before hashing
        'hashed_email', encode(sha256('test@example.com'::bytea), 'hex'),
        'hashed_phone', encode(sha256('+5491112345678'::bytea), 'hex')
    )
);

-- Verify it's queued
SELECT * FROM enhanced_conversions.uploads
WHERE gclid LIKE 'TEST_GCLID_%'
ORDER BY created_at DESC
LIMIT 5;

Performance Metrics

Worker Performance

-- Average processing time per conversion (estimate based on logs)
-- Typical: 100-200ms per conversion
-- Batch of 100: 10-20 seconds total

-- Worker throughput capacity
SELECT
DATE_TRUNC('hour', uploaded_at) as hour,
COUNT(*) as conversions_uploaded,
COUNT(DISTINCT tenant_id) as tenants_processed,
ROUND(AVG(conversion_value), 2) as avg_conversion_value
FROM enhanced_conversions.uploads
WHERE uploaded_at >= NOW() - INTERVAL '24 hours'
AND upload_status = 'success'
GROUP BY DATE_TRUNC('hour', uploaded_at)
ORDER BY hour DESC;

Google Ads API Quota

  • Daily operations: 15,000 operations/day (default basic-access limit)
  • Enhanced conversions: each upload counts as one operation
  • Rate limiting: built into the worker (uploads are processed sequentially)

-- Check if approaching the daily quota
SELECT
tenant_id,
COUNT(*) as uploads_today,
15000 - COUNT(*) as remaining_quota_estimate
FROM enhanced_conversions.uploads
WHERE DATE(uploaded_at) = CURRENT_DATE
AND upload_status = 'success'
GROUP BY tenant_id;

Maintenance Procedures

Weekly Maintenance

-- 1. Archive old conversions (keep last 90 days)
CREATE TABLE IF NOT EXISTS enhanced_conversions.uploads_archive (
LIKE enhanced_conversions.uploads INCLUDING ALL
);

INSERT INTO enhanced_conversions.uploads_archive
SELECT * FROM enhanced_conversions.uploads
WHERE created_at < CURRENT_DATE - INTERVAL '90 days';

DELETE FROM enhanced_conversions.uploads
WHERE created_at < CURRENT_DATE - INTERVAL '90 days';

-- 2. Vacuum tables
VACUUM ANALYZE enhanced_conversions.uploads;
VACUUM ANALYZE enhanced_conversions.upload_stats;

-- 3. Check index health
SELECT
schemaname,
tablename,
indexname,
idx_scan,
idx_tup_read,
idx_tup_fetch
FROM pg_stat_user_indexes
WHERE schemaname = 'enhanced_conversions'
ORDER BY idx_scan DESC;

Monthly Maintenance

-- 1. Regenerate daily statistics (if needed)
TRUNCATE enhanced_conversions.upload_stats;

INSERT INTO enhanced_conversions.upload_stats (
tenant_id,
date,
total_conversions,
successful_uploads,
failed_uploads,
total_conversion_value
)
SELECT
tenant_id,
DATE(conversion_date_time) as date,
COUNT(*) as total_conversions,
SUM(CASE WHEN upload_status = 'success' THEN 1 ELSE 0 END) as successful_uploads,
SUM(CASE WHEN upload_status = 'failed' THEN 1 ELSE 0 END) as failed_uploads,
SUM(conversion_value) as total_conversion_value
FROM enhanced_conversions.uploads
WHERE conversion_date_time >= CURRENT_DATE - INTERVAL '365 days'
GROUP BY tenant_id, DATE(conversion_date_time);

-- 2. Update Google Ads refresh tokens (if needed)
-- (Tokens expire after 6 months of inactivity)

Credential Rotation

-- Update credentials for a tenant
UPDATE enhanced_conversions.conversion_actions
SET
developer_token = 'NEW_TOKEN',
client_id = 'NEW_CLIENT_ID',
client_secret = 'NEW_CLIENT_SECRET',
refresh_token = 'NEW_REFRESH_TOKEN',
updated_at = CURRENT_TIMESTAMP
WHERE tenant_id = 56
AND conversion_action_id = '987654321';

Alerts and Notifications

Alert Conditions

Set up monitoring alerts for:

  1. High Failure Rate: > 20% failures in last hour
  2. Pending Queue Buildup: > 500 pending conversions
  3. No Uploads: No successful uploads in last 2 hours
  4. Authentication Errors: Repeated auth failures
  5. Worker Down: No logs in last 1 hour

Alert Queries

-- Alert: High failure rate
SELECT
tenant_id,
COUNT(*) as uploads_last_hour,
SUM(CASE WHEN upload_status = 'failed' THEN 1 ELSE 0 END) as failures,
ROUND(100.0 * SUM(CASE WHEN upload_status = 'failed' THEN 1 ELSE 0 END) / COUNT(*), 2) as failure_rate
FROM enhanced_conversions.uploads
WHERE created_at >= NOW() - INTERVAL '1 hour'
GROUP BY tenant_id
HAVING ROUND(100.0 * SUM(CASE WHEN upload_status = 'failed' THEN 1 ELSE 0 END) / COUNT(*), 2) > 20;

-- Alert: Pending queue buildup
SELECT *
FROM enhanced_conversions.v_pending_uploads
WHERE pending_count > 500;

-- Alert: No recent uploads
SELECT
ca.tenant_id,
ca.conversion_action_name,
MAX(u.uploaded_at) as last_upload,
NOW() - MAX(u.uploaded_at) as time_since_last_upload
FROM enhanced_conversions.conversion_actions ca
LEFT JOIN enhanced_conversions.uploads u
ON ca.tenant_id = u.tenant_id
AND u.upload_status = 'success'
WHERE ca.is_active = true
GROUP BY ca.tenant_id, ca.conversion_action_name
HAVING NOW() - MAX(u.uploaded_at) > INTERVAL '2 hours';
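
Any scheduler can run these checks. A minimal sketch that polls the queue-buildup view and posts to a webhook (ALERT_WEBHOOK_URL and the psycopg2 dependency are assumptions, not part of the worker):

import json
import os
import urllib.request

import psycopg2

def check_pending_queue(threshold: int = 500) -> None:
    """Alert on any tenant whose pending queue exceeds the threshold."""
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT tenant_id, pending_count "
            "FROM enhanced_conversions.v_pending_uploads "
            "WHERE pending_count > %s",
            (threshold,))
        rows = cur.fetchall()
    conn.close()
    for tenant_id, pending in rows:
        body = json.dumps(
            {"text": f"Tenant {tenant_id}: {pending} conversions pending"}
        ).encode("utf-8")
        req = urllib.request.Request(
            os.environ["ALERT_WEBHOOK_URL"],  # hypothetical webhook target
            data=body,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)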

Troubleshooting Guide

Worker Not Processing

Symptoms: No logs, no uploads

Checks:

# 1. Check Railway service status
railway status --service=enhanced-conversions-worker

# 2. Check logs for errors
railway logs --tail=100 --filter="error"

# 3. Check DATABASE_URL is set
railway variables | grep DATABASE_URL

# 4. Check worker mode
railway variables | grep WORKER_MODE

Solution:

# Restart service
railway restart --service=enhanced-conversions-worker

# Redeploy if needed
railway up

Authentication Failures

Symptoms: "AUTHENTICATION_ERROR" in logs

Checks:

-- Check credentials are configured
SELECT
tenant_id,
conversion_action_name,
ads_customer_id,
CASE WHEN refresh_token IS NOT NULL THEN 'SET' ELSE 'MISSING' END as token_status
FROM enhanced_conversions.conversion_actions
WHERE is_active = true;

Solution:

# Regenerate refresh token
python generate_refresh_token.py

# Update in database
# (See Credential Rotation section)

Conversions Stuck in Pending

Symptoms: Conversions remain pending after 3 attempts

Checks:

-- Find stuck conversions
SELECT *
FROM enhanced_conversions.uploads
WHERE upload_status = 'pending'
AND upload_attempts >= 3
AND created_at < NOW() - INTERVAL '2 hours';

Solution:

-- Mark as failed (for investigation)
UPDATE enhanced_conversions.uploads
SET upload_status = 'failed',
error_message = 'Manual review required - stuck after 3 attempts',
updated_at = CURRENT_TIMESTAMP
WHERE upload_status = 'pending'
AND upload_attempts >= 3
AND created_at < NOW() - INTERVAL '2 hours';

Low Match Rates

Symptoms: Google Ads shows < 60% match rate

Checks:

-- Check data quality
SELECT
tenant_id,
COUNT(*) as total_uploads,
COUNT(hashed_email) as has_email,
COUNT(hashed_phone) as has_phone,
COUNT(hashed_first_name) as has_first_name,
ROUND(100.0 * COUNT(hashed_phone) / COUNT(*), 2) as phone_coverage
FROM enhanced_conversions.uploads
WHERE upload_status = 'success'
AND uploaded_at >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY tenant_id;

Solution:

  • Improve CDP data quality (see CDP Data Requirements)
  • Add more customer data fields (phone, name, address)
  • Verify data normalization is correct

Last updated: October 8, 2025
Version: 1.0.0
Author: Miguel Angel Hernandez
Status: Production-ready