Now, let’s track the data our model ingests. Assume our application pulls vectors from two Pinecone indexes. We’ll need the Pinecone API key and its associated environment (e.g., us-west1-gcp-free).
from coaxial.models import IntegratePineconeRequest

integrate_pinecone_request = IntegratePineconeRequest(
    api_key="PINECONE_API_KEY",
    environment="PINECONE_ENVIRONMENT"
)

try:
    coaxial_api.data_integration.integrate_pinecone(integrate_pinecone_request)
except Exception as e:
    print("Exception when Integrating Pinecone: %s\n" % e)
If the integration is successful, we’ll list our integrated indexes (there should be two for our example application).
api_response = coaxial_api.data_integration.list_data_integrations()
print(api_response)
# Here, we can save the data integration Coaxial IDs
# for provisioning/de-provisioning later on
For this LLM application, we can control the models users have access to. Assuming we’re building a standard chat interface with OpenAI, we can pull all available models from our OpenAI account.
from coaxial.models import IntegrateOpenaiRequest

integrate_openai_request = IntegrateOpenaiRequest(openai_key="OPENAI_API_KEY")

try:
    coaxial_api.model_integration.integrate_openai(integrate_openai_request)
except Exception as e:
    print("Exception when Integrating OpenAI: %s\n" % e)
If the integration is successful, we’ll list our integrated models and their respective Coaxial IDs (specifically, we should look out for the chat/embedding models).
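Here is a minimal sketch of that listing step. It assumes the model integrations expose a listing call analogous to list_data_integrations above; list_model_integrations is an assumed name, so verify the exact endpoint in the API Integration reference.

# Assumption: model_integration exposes a listing call mirroring
# data_integration.list_data_integrations(); verify against the API Integration docs
api_response = coaxial_api.model_integration.list_model_integrations()
print(api_response)
# Save the Coaxial IDs of the chat and embedding models for provisioning below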
Now, let’s say we want the model to call an OpenAI function that gives a JSON Object summary of the chat response (and only certain users can access this function). Here is how we would integrate it:
from coaxial.models import IntegrateFunctionRequest

integrate_function_request = IntegrateFunctionRequest(
    function_name="summarize_chat_response",
    description="This is an OpenAI function that gives a JSON Object summary of the chat response"
)

try:
    coaxial_api.function.integrate_function(integrate_function_request)
except Exception as e:
    print("Exception when Integrating Function: %s\n" % e)
Now, we will provision a specific employee with access to the entire LLM application’s functionality. This includes both Pinecone indexes, the embedding and chat models, and the OpenAI function.
from coaxial.models import GrantAccessRequest

resource_ids = [
    # These IDs can also be found on the Coaxial Dashboard
    "PINECONE_INDEX_1_COAXIAL_ID",
    "PINECONE_INDEX_2_COAXIAL_ID",
    "EMBEDDING_MODEL_COAXIAL_ID",
    "CHAT_MODEL_COAXIAL_ID",
    "FUNCTION_COAXIAL_ID"
]

try:
    for resource in resource_ids:
        grant_access_request = GrantAccessRequest(
            user_id="EMPLOYEE_ID",
            coaxial_id=resource
        )
        coaxial_api.provision.grant_access(grant_access_request)
except Exception as e:
    print("Exception when granting access: %s\n" % e)
At any point during the LLM application’s life-cycle, we can check whether a user still has access to the required resources.
This way, if our admin wants to revoke an employee’s function access (for example) through the dashboard, the application will respond immediately.
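The revocation snippet below calls a check_resources helper to confirm the result of such changes. Here is a minimal sketch of that helper; it assumes a per-resource access-check endpoint on provision (check_user_access is an assumed name, not a confirmed SDK call), so adapt it to the actual endpoint in the API Integration reference.

# Hypothetical helper used to double-check access after provisioning changes.
# Assumption: coaxial_api.provision.check_user_access(user_id=..., coaxial_id=...)
# is an assumed endpoint name; swap in the SDK's real access-check call.
def check_resources(user_id):
    resource_ids = [
        "PINECONE_INDEX_1_COAXIAL_ID",
        "PINECONE_INDEX_2_COAXIAL_ID",
        "EMBEDDING_MODEL_COAXIAL_ID",
        "CHAT_MODEL_COAXIAL_ID",
        "FUNCTION_COAXIAL_ID"
    ]
    access = {}
    for resource in resource_ids:
        # Record whether this user can currently reach each resource
        access[resource] = coaxial_api.provision.check_user_access(
            user_id=user_id,
            coaxial_id=resource
        )
    return access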
Finally, say we want to automatically revoke access to the datasets for a specific user who has been misbehaving.
from coaxial.models import RevokeAccessRequest

revoke_ids = [
    "PINECONE_INDEX_1_COAXIAL_ID",
    "PINECONE_INDEX_2_COAXIAL_ID"
]

try:
    for resource in revoke_ids:
        revoke_access_request = RevokeAccessRequest(
            user_id="EMPLOYEE_ID",
            coaxial_id=resource
        )
        coaxial_api.provision.revoke_access(revoke_access_request)
except Exception as e:
    print("Exception when revoking access: %s\n" % e)

print(check_resources("EMPLOYEE_ID"))  # double-check access to resources
That’s the end of the Quickstart! Our example LLM application now has precise, identity-based control over the functionality the model can use and the data it ingests.
For a more detailed overview of all the endpoints Coaxial provides (including client code examples), please see the API Integration.