Our straightforward implementation process takes you from internal function to a fully secured, globally accessible API endpoint in minutes, not months. Follow our step-by-step technical guide to unlock the power of function-native API development.
Start by deploying a new APIFront Gateway. This gives you a unique Gateway URL where all your APIs will be proxied.
https://APIFront_proxy_url/api/v1/{your_Gateway_ID}/
Your Gateway ID is a unique identifier that ensures your APIs are securely isolated from other customers. The Gateway acts as the central connection point between your internal functions and external consumers, handling authentication, routing, and monitoring.
Your API endpoints follow this structure:
https://APIFront_proxy/api/v1/{your_Gateway_ID}/{Service_Name}/{Function_Name}
Where:
- `{your_Gateway_ID}`: the unique identifier assigned to your Gateway
- `{Service_Name}`: the logical service that groups related functions
- `{Function_Name}`: the individual function exposed within that service
Example endpoints:
https://APIFront_proxy/api/v1/{your_Gateway_ID}/user-service/create-user
https://APIFront_proxy/api/v1/{your_Gateway_ID}/user-service/update-profile
https://APIFront_proxy/api/v1/{your_Gateway_ID}/analytics/generate-report
https://APIFront_proxy/api/v1/{your_Gateway_ID}/ai-tools/process-image
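As a quick illustration, the endpoint path can be assembled programmatically. This is a minimal sketch: the gateway ID below is a placeholder, and the real host and IDs come from your deployed Gateway.

```python
# Minimal sketch: building an APIFront endpoint URL from its parts.
# "my-gateway-id" is a placeholder, not a real Gateway ID.

def endpoint_url(gateway_id: str, service_name: str, function_name: str,
                 host: str = "https://APIFront_proxy") -> str:
    """Assemble the /api/v1/{gateway_id}/{service}/{function} path."""
    return f"{host}/api/v1/{gateway_id}/{service_name}/{function_name}"

url = endpoint_url("my-gateway-id", "user-service", "create-user")
print(url)  # https://APIFront_proxy/api/v1/my-gateway-id/user-service/create-user
```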
Feature | Description | Benefit |
---|---|---|
📦 Logical Service Grouping | Each Service Name represents a logical service that contains multiple related functions | • Organized API structure • Clean separation of concerns • Logical function grouping |
🔄 Single Source Deployment | All functions under a single Service Name must be proxied by the same script/program | • Simplified deployment • Version consistency • Easier maintenance |
🔠 Language Consistency | Each Service Name must use a single language (Node.js, Python, or .NET) | • Optimized performance • Environment consistency • Simplified debugging |
⚖️ Automatic Load Balancing | Run multiple instances of the same service and APIFront automatically load balances incoming requests | • Horizontal scaling • High availability • Optimized performance |
🛡️ Service Integrity | Functions for the same Service Name cannot be split across different scripts | • Consistent behavior • Simplified testing • Reliable performance |
This architecture enables clean service organization while providing automatic scaling capabilities without additional infrastructure.
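APIFront's actual balancing strategy is internal to the platform, so the sketch below is purely illustrative: it shows the effect of running several identical instances of one service, with incoming requests spread across them.

```python
# Illustrative sketch only: APIFront's real load balancing is handled by the
# platform. This models the observable effect of registering multiple
# identical instances of one service: requests rotate across them.

from itertools import cycle

class GatewaySketch:
    def __init__(self):
        self._instances = {}   # service name -> list of instance handlers
        self._rotation = {}    # service name -> round-robin iterator

    def register(self, service, handler):
        # Each additional registration acts like one more running copy.
        self._instances.setdefault(service, []).append(handler)
        self._rotation[service] = cycle(self._instances[service])

    def dispatch(self, service, payload):
        # Hand the request to the next instance in rotation.
        return next(self._rotation[service])(payload)

gateway = GatewaySketch()
gateway.register("user-service", lambda p: ("instance-1", p))
gateway.register("user-service", lambda p: ("instance-2", p))

print(gateway.dispatch("user-service", "req-a")[0])  # instance-1
print(gateway.dispatch("user-service", "req-b")[0])  # instance-2
```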
APIFront makes it easy to expose your internal functions as secure APIs with just a few lines of code, regardless of the programming language you use.
```javascript
// JavaScript Example: Exposing all user-service functions from a single service
const apifront = require('databridges-apifront-proxy');
const proxy = new apifront();

// Configure gateway connection
proxy.config({
    apifront_appkey: 'YOUR_APP_KEY',
    apifront_appsecret: 'YOUR_APP_SECRET',
    apifront_auth_url: 'YOUR_AUTH_URL',
    apifront_middlewareid: 'YOUR_MIDDLEWARE_ID'
});

// User creation function
function createUser(inparameter, response, proxyPath) {
    try {
        const userData = JSON.parse(inparameter);
        // Your user creation logic
        const result = yourUserSystem.createUser(userData);
        response.end(JSON.stringify({
            status: 'SUCCESS',
            userId: result.userId
        }));
    } catch (err) {
        response.exception('USER_ERROR', err.message);
    }
}

// User profile update function
function updateProfile(inparameter, response, proxyPath) {
    try {
        const profileData = JSON.parse(inparameter);
        // Your profile update logic
        const result = yourUserSystem.updateProfile(profileData);
        response.end(JSON.stringify({
            status: 'SUCCESS',
            updated: true
        }));
    } catch (err) {
        response.exception('PROFILE_ERROR', err.message);
    }
}

// All functions for "user-service" are proxied from this script
proxy.proxy('user-service/create-user', createUser);
proxy.proxy('user-service/update-profile', updateProfile);

// Start the proxy - this instance can now be load balanced by running multiple copies
proxy.start().then(() => {
    console.log('User service functions are now exposed as APIs');
}).catch(err => console.error(err));
```
```python
# Python Example: Exposing all analytics functions from a single service
from databridges_apifront_proxy import ApiProxy
import json

# Initialize proxy
proxy = ApiProxy()

# Configure gateway connection
proxy.config({
    "apifront_appkey": "YOUR_APP_KEY",
    "apifront_appsecret": "YOUR_APP_SECRET",
    "apifront_auth_url": "YOUR_AUTH_URL",
    "apifront_middlewareid": "YOUR_MIDDLEWARE_ID"
})

# Generate report function
async def generate_report(inparameter, response, proxy_path):
    try:
        params = json.loads(inparameter)
        report_type = params.get("report_type")
        date_range = params.get("date_range")
        # Your analytics logic
        report_data = your_analytics_engine.generate_report(report_type, date_range)
        response.end(json.dumps({
            "status": "SUCCESS",
            "data": report_data
        }))
    except Exception as e:
        response.exception("REPORT_ERROR", str(e))

# Data visualization function
async def visualize_data(inparameter, response, proxy_path):
    try:
        params = json.loads(inparameter)
        visualization_type = params.get("type")
        data_source = params.get("source")
        # Your visualization logic
        visualization = your_analytics_engine.create_visualization(visualization_type, data_source)
        response.end(json.dumps({
            "status": "SUCCESS",
            "visualization": visualization
        }))
    except Exception as e:
        response.exception("VIZ_ERROR", str(e))

# All functions for "analytics" are proxied from this script
proxy.proxy("analytics/generate-report", generate_report)
proxy.proxy("analytics/visualize-data", visualize_data)

# Start the proxy - this instance can now be load balanced by running multiple copies
proxy.start()
```
Key implementation notes:
- All functions for a given Service Name must be proxied from the same script, written in a single language (Node.js, Python, or .NET).
- Each handler receives the raw `inparameter` payload, a `response` object, and the proxy path; return results with `response.end()` and report failures with `response.exception()`.
- Keep your app key, app secret, auth URL, and middleware ID out of source control; load them from environment variables or secure configuration.
- To scale horizontally, run additional copies of the same script; APIFront load balances requests across instances automatically.
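The handler contract shown in the examples (parse the `inparameter` string, then call `response.end` on success or `response.exception` on failure) can be exercised locally before deployment. The `ResponseStub` class below is a hypothetical test double for illustration, not part of the APIFront SDK.

```python
import json

# ResponseStub is a hypothetical test double, not part of the APIFront SDK.
# It records what a handler would have sent back through the Gateway.
class ResponseStub:
    def __init__(self):
        self.body = None
        self.error = None

    def end(self, payload):
        self.body = payload

    def exception(self, code, message):
        self.error = (code, message)

def create_user(inparameter, response, proxy_path):
    """Same handler shape as the proxied functions above."""
    try:
        user_data = json.loads(inparameter)
        response.end(json.dumps({"status": "SUCCESS", "name": user_data["name"]}))
    except Exception as e:
        response.exception("USER_ERROR", str(e))

ok = ResponseStub()
create_user(json.dumps({"name": "Ada"}), ok, "user-service/create-user")
print(ok.body)  # {"status": "SUCCESS", "name": "Ada"}

bad = ResponseStub()
create_user("not-json", bad, "user-service/create-user")
print(bad.error[0])  # USER_ERROR
```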
Once your functions are proxied through APIFront, they become accessible to AI systems and applications via standard REST API calls.
```python
# Python Example: LLM system using APIFront-exposed functions
from openai import OpenAI
import json
import requests
from oauth2_client import OAuth2Client

# Authentication setup
def setup_apifront_auth():
    auth_client = OAuth2Client(
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        token_endpoint="https://apifront.io/oauth/token"
    )
    token = auth_client.get_token()
    return token

# Function to call any exposed API endpoint
def call_apifront_function(gateway_id, service_name, function_name, payload):
    token = setup_apifront_auth()
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json"
    }
    # Using the APIFront path structure
    url = f"https://APIFront_proxy/api/v1/{gateway_id}/{service_name}/{function_name}"
    response = requests.post(
        url,
        headers=headers,
        json=payload
    )
    return response.json()

# LLM integration with function calling
client = OpenAI()

def process_user_query(user_input, customer_id, gateway_id):
    # Use LLM to determine user intent
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Determine user intent and extract parameters."},
            {"role": "user", "content": user_input}
        ],
        response_format={"type": "json_object"}
    )
    # Parse LLM response
    intent_data = json.loads(completion.choices[0].message.content)
    intent = intent_data.get("intent")
    # Based on intent, call appropriate service function
    if intent == "user_profile":
        # Call user-service API
        result = call_apifront_function(
            gateway_id,
            "user-service",  # Service name
            "get-profile",   # Function name
            {"userID": customer_id}
        )
        return generate_profile_response(result, user_input)
    elif intent == "analytics_report":
        # Call analytics service API
        report_type = intent_data.get("report_type", "summary")
        result = call_apifront_function(
            gateway_id,
            "analytics",        # Service name
            "generate-report",  # Function name
            {
                "report_type": report_type,
                "date_range": "last_30_days"
            }
        )
        return generate_report_response(result, user_input)
```
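The if/elif intent routing above can also be expressed as a dispatch table, keeping the mapping from intent to (service, function) pair in one place as more functions are exposed. The intent names and payload shapes below mirror the example; the table structure itself is a suggested pattern, not part of the APIFront SDK.

```python
# Sketch: routing LLM intents to APIFront (service, function) pairs via a
# dispatch table instead of if/elif chains. Payloads mirror the example above.

ROUTES = {
    "user_profile": (
        "user-service", "get-profile",
        lambda intent_data, customer_id: {"userID": customer_id},
    ),
    "analytics_report": (
        "analytics", "generate-report",
        lambda intent_data, customer_id: {
            "report_type": intent_data.get("report_type", "summary"),
            "date_range": "last_30_days",
        },
    ),
}

def route(intent_data, customer_id):
    """Resolve an intent to the service, function, and request payload."""
    intent = intent_data.get("intent")
    if intent not in ROUTES:
        return None  # unknown intent: fall back to a plain LLM reply
    service, function, build_payload = ROUTES[intent]
    return service, function, build_payload(intent_data, customer_id)

print(route({"intent": "user_profile"}, "cust-42"))
# ('user-service', 'get-profile', {'userID': 'cust-42'})
```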
This example demonstrates how to:
- Authenticate against the Gateway using OAuth 2.0 client credentials
- Call any exposed function through the standard `/{gateway_id}/{service_name}/{function_name}` path with a Bearer token
- Let an LLM map a natural-language request to the appropriate service function and parameters
Our team is available to help with your integration and answer any technical questions.