Quickstart
This is a short tutorial on using the Coaxial Developer SDK.
Prerequisite: You should have Python 3.7 or higher installed.
For this tutorial, we will integrate Coaxial with an example vector-embedding chat application for internal company employees.
Set up your development environment
Currently, we provide developer clients for Python and TypeScript. You can install the Python client with pip:
pip install coaxial-python
or the TypeScript client with npm:
npm i coaxial-ts --save
To use the package, import the client like so:
import coaxial
or
import { CoaxialApi } from "coaxial-ts"
API Keys/Authentication
To generate a Coaxial API Key, see the API Reference. Once you’ve generated your key, create a Coaxial API Instance:
coaxial_api = coaxial.Api(api_key="YOUR_COAXIAL_API_KEY")
Auth Integration
Let’s integrate our employees from Okta using the Okta client key and organization URL.
from coaxial.models import IntegrateOktaRequest

integrate_okta_request = IntegrateOktaRequest(
    client_key="OKTA_API_KEY",
    org_url="OKTA_ORGANIZATION_URL"
)

try:
    coaxial_api.auth_integration.integrate_okta(integrate_okta_request)
except Exception as e:
    print("Exception when integrating Okta: %s\n" % e)
Listing Users
If the integration is successful, we’ll list our users to get their user IDs.
api_response = coaxial_api.auth_integration.list_users()
print(api_response)
# Here, we can save the user IDs for provisioning/de-provisioning later on
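The exact shape of the `list_users` response depends on the SDK version, so the snippet below is only a sketch: it assumes each user entry exposes `user_id` and `email` fields (hypothetical names) and collects the IDs into a lookup table for later provisioning calls.

```python
# Hypothetical response shape -- the real field names depend on the SDK.
# We assume each user entry carries "user_id" and "email".
api_response = {
    "users": [
        {"user_id": "00u1abcd", "email": "alice@example.com"},
        {"user_id": "00u2efgh", "email": "bob@example.com"},
    ]
}

# Map emails to user IDs so we can provision/de-provision by email later.
user_ids = {u["email"]: u["user_id"] for u in api_response["users"]}
print(user_ids)
```

If the client returns model objects rather than dictionaries, the same idea applies with attribute access instead of key lookups.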
Data Integration
Now, let’s track the data our model ingests. Assume our application pulls vectors from two Pinecone indexes. We’ll need the Pinecone API key and its associated environment (e.g., us-west1-gcp-free).
from coaxial.models import IntegratePineconeRequest

integrate_pinecone_request = IntegratePineconeRequest(
    api_key="PINECONE_API_KEY",
    environment="PINECONE_ENVIRONMENT"
)

try:
    coaxial_api.data_integration.integrate_pinecone(
        integrate_pinecone_request
    )
except Exception as e:
    print("Exception when integrating Pinecone: %s\n" % e)
Listing Data Integrations
If the integration is successful, we’ll list our integrated indexes (there should be two for our example application).
api_response = coaxial_api.data_integration.list_data_integrations()
print(api_response)
# Here, we can save the data integration Coaxial IDs
# for provisioning/de-provisioning later on
Model Integration
For this LLM application, we can control the models users have access to. Assuming we’re building a standard chat interface with OpenAI, we can pull all available models from our OpenAI account.
from coaxial.models import IntegrateOpenaiRequest

integrate_openai_request = IntegrateOpenaiRequest(openai_key="OPENAI_API_KEY")

try:
    coaxial_api.model_integration.integrate_openai(integrate_openai_request)
except Exception as e:
    print("Exception when integrating OpenAI: %s\n" % e)
Listing Model Integrations
If the integration is successful, we’ll list our integrated models and their respective Coaxial IDs (specifically, we should look out for the chat/embedding models).
api_response = coaxial_api.model_integration.list_model_integrations()
print(api_response)
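Since our example application only needs the chat and embedding models, we can filter the listing down to their Coaxial IDs. The response shape below is an assumption for illustration (the model names and `coaxial_id` field are hypothetical), not the documented return type:

```python
# Hypothetical response shape -- actual field names depend on the SDK.
api_response = {
    "models": [
        {"model": "gpt-3.5-turbo", "coaxial_id": "cx_chat_123"},
        {"model": "text-embedding-ada-002", "coaxial_id": "cx_embed_456"},
        {"model": "whisper-1", "coaxial_id": "cx_audio_789"},
    ]
}

# Keep only the models our chat application actually calls.
wanted = {"gpt-3.5-turbo", "text-embedding-ada-002"}
model_ids = {
    m["model"]: m["coaxial_id"]
    for m in api_response["models"]
    if m["model"] in wanted
}
```

The resulting `model_ids` mapping gives us the Coaxial IDs we will pass to the provisioning calls below.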
Functions
Now, let’s say we want the model to call an OpenAI function that gives a JSON Object summary of the chat response (and only certain users can access this function). Here is how we would integrate it:
from coaxial.models import IntegrateFunctionRequest

integrate_function_request = IntegrateFunctionRequest(
    function_name="summarize_chat_response",
    description="This is an OpenAI function that gives a JSON Object summary of the chat response"
)

try:
    coaxial_api.function.integrate_function(integrate_function_request)
except Exception as e:
    print("Exception when integrating Function: %s\n" % e)
Listing Functions
If the integration is successful, we’ll see the function we just integrated.
api_response = coaxial_api.function.list_functions()
print(api_response)
Access Controls
Provisioning
Now, we will provision a specific employee with access to the entire LLM application functionality. This includes both Pinecone indexes, the embedding and chat models, and the OpenAI function.
from coaxial.models import GrantAccessRequest

resource_ids = [  # These IDs can also be found on the Coaxial Dashboard
    "PINECONE_INDEX_1_COAXIAL_ID",
    "PINECONE_INDEX_2_COAXIAL_ID",
    "EMBEDDING_MODEL_COAXIAL_ID",
    "CHAT_MODEL_COAXIAL_ID",
    "FUNCTION_COAXIAL_ID"
]

try:
    for resource in resource_ids:
        grant_access_request = GrantAccessRequest(
            user_id="EMPLOYEE_ID",
            coaxial_id=resource
        )
        coaxial_api.provision.grant_access(grant_access_request)
except Exception as e:
    print("Exception when granting access: %s\n" % e)
Checking Access
At any point during the LLM application life-cycle, we are able to see if a user can access the required resources. This way, if our admin wants to revoke an employee’s function access (for example) through the dashboard, the application will respond immediately.
from coaxial.models import CheckAccessForAllRequest

def check_resources(user_id):
    check_access_for_all_request = CheckAccessForAllRequest(
        user_id=user_id,
        tags=resource_ids
    )
    return coaxial_api.provision.check_access_for_all(
        check_access_for_all_request
    )
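As a sketch of how the application might consume this check: gate each incoming chat request on the result before touching any resource. The boolean-per-resource response shape and the stub checker below are assumptions for illustration, not part of the Coaxial API.

```python
# Gate a chat request on the access check. check_fn stands in for
# check_resources(); we assume it returns {resource_id: bool}.
def handle_chat(user_id, prompt, check_fn):
    access = check_fn(user_id)
    if not all(access.values()):
        denied = [r for r, ok in access.items() if not ok]
        return "Access denied for: " + ", ".join(sorted(denied))
    return "OK: forwarding prompt to the chat model"

# Stub checker simulating an admin revoking function access for non-admins.
def fake_check(user_id):
    return {
        "CHAT_MODEL_COAXIAL_ID": True,
        "FUNCTION_COAXIAL_ID": user_id == "admin",
    }

print(handle_chat("admin", "Summarize this thread", fake_check))
print(handle_chat("emp42", "Summarize this thread", fake_check))
```

Because the check runs on every request, a revocation made through the dashboard takes effect on the very next call, which is the "responds immediately" behavior described above.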
Revoking Access
Finally, say we want to automatically revoke a misbehaving user’s access to the datasets.
from coaxial.models import RevokeAccessRequest

revoke_ids = [
    "PINECONE_INDEX_1_COAXIAL_ID",
    "PINECONE_INDEX_2_COAXIAL_ID"
]

try:
    for resource in revoke_ids:
        revoke_access_request = RevokeAccessRequest(
            user_id="EMPLOYEE_ID",
            coaxial_id=resource
        )
        coaxial_api.provision.revoke_access(revoke_access_request)
except Exception as e:
    print("Exception when revoking access: %s\n" % e)

print(check_resources("EMPLOYEE_ID"))  # double-check access to resources
That’s the end of the Quickstart! Our example LLM application now has precise control over the functionality and data the model ingests (depending on user identity). For a more detailed overview of all the endpoints Coaxial provides (including client code examples), please see the API Integration.