Auto-Instrumentation
The SDK can automatically instrument supported LLM libraries:
from value import initialize_sync, auto_instrument
client = initialize_sync(agent_secret="your_agent_secret")
auto_instrument() # Instruments all available libraries
Google Generative AI (Gemini)
pip install value-python[genai]
from value import initialize_sync, auto_instrument
from google import genai
client = initialize_sync(agent_secret="your_agent_secret")
auto_instrument(["gemini"])
# Calls are automatically traced
gemini_client = genai.Client(api_key="your-key")
response = gemini_client.models.generate_content(
    model="gemini-2.5-flash",
    contents=["Write a haiku"]
)
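The instrumented client returns the same response object as an uninstrumented one, so downstream code does not change. Reading the result is plain google-genai usage, not part of the SDK:
# The traced call still returns a normal google-genai response
print(response.text)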
LangChain
pip install value-python[langchain]
from value import initialize_sync, auto_instrument
from langchain_openai import ChatOpenAI
client = initialize_sync(agent_secret="your_agent_secret")
auto_instrument(["langchain"])
# LangChain calls are automatically traced
llm = ChatOpenAI(model="gpt-4")
response = llm.invoke("Hello")
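Composed runnables go through the same ChatOpenAI model, so chain invocations should be traced as well, assuming the instrumentation hooks the underlying model call. The prompt and pipe operator below are standard LangChain, not part of the SDK:
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Write a haiku about {topic}")
chain = prompt | llm

# The ChatOpenAI call inside the chain is the instrumented one
response = chain.invoke({"topic": "autumn"})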
Manual Instrumentation
For custom logic, use action contexts:
from value import initialize_sync
client = initialize_sync(agent_secret="your_agent_secret")
def my_agent_function(user_input):
    with client.action_context(user_id="user_123", anonymous_id="session_abc") as ctx:
        # Your agent logic
        result = process(user_input)

        ctx.send(
            action_name="agent_execution",
            **{
                "value.action.description": "Processed user request",
                "input_length": len(user_input),
            }
        )

        return result
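ctx.send accepts arbitrary keyword attributes, so you can attach whatever metadata your agent produces. A minimal sketch recording a duration alongside the documented description field; duration_ms is a hypothetical attribute name chosen for illustration, not a reserved key:
import time

def timed_agent_function(user_input):
    with client.action_context(user_id="user_123", anonymous_id="session_abc") as ctx:
        start = time.monotonic()
        result = process(user_input)

        ctx.send(
            action_name="agent_execution",
            **{
                "value.action.description": "Processed user request",
                "duration_ms": int((time.monotonic() - start) * 1000),  # hypothetical attribute
            }
        )

        return result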
Check Available Libraries
from value import get_supported_libraries, is_library_available
# List all supported libraries
print(get_supported_libraries()) # ['gemini', 'langchain']
# Check if a library's instrumentation is installed
print(is_library_available("gemini")) # True/False
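These helpers can be combined with auto_instrument to instrument only the integrations that are actually installed; a minimal sketch using only the functions shown above:
from value import auto_instrument, get_supported_libraries, is_library_available

# Instrument every supported library whose extra is installed
available = [lib for lib in get_supported_libraries() if is_library_available(lib)]
auto_instrument(available)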