Quickstart
This is a short tutorial on using the Coaxial Developer SDK.
Prerequisite: You should have Python installed (version 3.7 or higher).
For this tutorial, we will integrate Coaxial with an example vector-embedding chat application for internal company employees.
Set up your development environment
Currently, we provide developer clients for Python and TypeScript. You can install the client with pip or npm.
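As an illustration, the install commands typically look like the following; the package name `coaxial` is an assumption here, so check PyPI and npm for the exact registry names:

```shell
# Python client (assumed PyPI package name)
pip install coaxial

# TypeScript client (assumed npm package name)
npm install coaxial
```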
To use the package, import the client.
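For example, in Python the import might look like this; the module and class names are assumptions, so confirm them against the API Reference:

```python
# Assumed module and client class names; confirm against the API Reference.
from coaxial import Coaxial
```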
API Keys/Authentication
To generate a Coaxial API Key, see the API Reference. Once you’ve generated your key, create a Coaxial API Instance:
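A minimal sketch of client construction, assuming the client class is named `Coaxial` and accepts the key as a constructor argument (both are assumptions, not the documented surface):

```python
from coaxial import Coaxial  # assumed package and class names

# Assumed constructor signature; check the API Reference for the exact form.
client = Coaxial(api_key="YOUR_COAXIAL_API_KEY")
```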
Auth Integration
Let’s integrate our employees from Okta using the Okta client key and organization URL.
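A sketch of what the Okta integration call might look like; the method name `integrate_okta` and its parameters are assumptions, so consult the API Reference for the real signature:

```python
# Hypothetical method and parameter names; replace the placeholders
# with your Okta client key and organization URL.
client.integrate_okta(
    client_key="OKTA_CLIENT_KEY",
    org_url="https://your-org.okta.com",
)
```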
Listing Users
If the integration is successful, we’ll list our users to get their user IDs.
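A sketch of the listing call, assuming a `list_users` method that returns records with an `id` field (both assumptions):

```python
# Hypothetical method name; assumes each user record exposes an "id" field.
users = client.list_users()
for user in users:
    print(user["id"])
```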
Data Integration
Now, let’s track the data our model ingests. Assume our application pulls vectors from two Pinecone indexes. We’ll need the Pinecone API key and its associated environment (e.g., us-west1-gcp-free).
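A sketch of the Pinecone integration call; the method name `integrate_pinecone` and its parameters are assumptions:

```python
# Hypothetical method and parameter names.
client.integrate_pinecone(
    api_key="PINECONE_API_KEY",
    environment="us-west1-gcp-free",
)
```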
Listing Data Integrations
If the integration is successful, we’ll list our integrated indexes (there should be two for our example application).
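A sketch of the listing call, assuming a `list_data_integrations` method (an assumption):

```python
# Hypothetical method name; expect two entries for this example application.
indexes = client.list_data_integrations()
print(indexes)
```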
Model Integration
For this LLM application, we can control the models users have access to. Assuming we’re building a standard chat interface with OpenAI, we can pull all available models from our OpenAI account.
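A sketch of the OpenAI model integration; the method name `integrate_openai` is an assumption:

```python
# Hypothetical method name; pulls all models available to the OpenAI account.
client.integrate_openai(api_key="OPENAI_API_KEY")
```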
Listing Model Integrations
If the integration is successful, we’ll list our integrated models and their respective Coaxial IDs (specifically, we should look out for the chat/embedding models).
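A sketch of the listing call, assuming a `list_model_integrations` method whose records carry a `coaxial_id` and `name` (all assumptions):

```python
# Hypothetical method and field names; note the Coaxial IDs of the
# chat and embedding models for later provisioning.
models = client.list_model_integrations()
for model in models:
    print(model["coaxial_id"], model["name"])
```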
Functions
Now, let’s say we want the model to call an OpenAI function that gives a JSON Object summary of the chat response (and only certain users can access this function). Here is how we would integrate it:
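The function itself is described in the standard OpenAI function-calling schema; the schema below is a sketch for this example (the function name and fields are illustrative):

```python
# OpenAI function-calling schema for a JSON object summary of the chat response.
summary_function = {
    "name": "summarize_chat",
    "description": "Return a JSON object summarizing the chat response.",
    "parameters": {
        "type": "object",
        "properties": {
            "summary": {
                "type": "string",
                "description": "A short summary of the response.",
            }
        },
        "required": ["summary"],
    },
}
```

You would then register it with something like `client.integrate_function(function=summary_function)`; the `integrate_function` method name is an assumption, so check the API Reference for the actual call.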
Listing Functions
If the integration is successful, we’ll see the function we just integrated.
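A sketch of the listing call, assuming a `list_functions` method (an assumption):

```python
# Hypothetical method name.
functions = client.list_functions()
print(functions)
```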
Access Controls
Provisioning
Now, we will provision a specific employee with access to the entire LLM application functionality. This includes both Pinecone indexes, the embedding/chat models, and the OpenAI function.
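A sketch of the provisioning call; the method name `provision_user` and the bulk `resource_ids` parameter are assumptions, and the placeholder IDs stand in for the Coaxial IDs returned by the listing calls earlier:

```python
# Hypothetical method and parameter names; the IDs come from the
# list_users / listing calls earlier in this tutorial.
client.provision_user(
    user_id="USER_ID",
    resource_ids=[
        "PINECONE_INDEX_1_ID",
        "PINECONE_INDEX_2_ID",
        "EMBEDDING_MODEL_ID",
        "CHAT_MODEL_ID",
        "SUMMARIZE_FUNCTION_ID",
    ],
)
```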
Checking Access
At any point during the LLM application life-cycle, we are able to see if a user can access the required resources. This way, if our admin wants to revoke an employee’s function access (for example) through the dashboard, the application will respond immediately.
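A sketch of gating application logic on an access check; the method name `check_access` and its parameters are assumptions:

```python
# Hypothetical method name; gate the function call on the user's access,
# so a revocation from the dashboard takes effect immediately.
if client.check_access(user_id="USER_ID", resource_id="SUMMARIZE_FUNCTION_ID"):
    run_summarize_function()  # your application's own logic
else:
    print("Access revoked; skipping function call")
```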
Revoking Access
Finally, say we want to automatically revoke access to the datasets for a specific user who has been misbehaving.
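A sketch of the revocation call; the method name `revoke_access` and its parameters are assumptions:

```python
# Hypothetical method and parameter names; revokes only the dataset
# (Pinecone index) resources for this user.
client.revoke_access(
    user_id="USER_ID",
    resource_ids=["PINECONE_INDEX_1_ID", "PINECONE_INDEX_2_ID"],
)
```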
That’s the end of the Quickstart! Our example LLM application now has precise control over the functionality and data the model ingests (depending on user identity). For a more detailed overview of all the endpoints Coaxial provides (including client code examples), please see the API Integration.