
Trace without setting environment variables

As mentioned in other guides, the following environment variables allow you to configure whether tracing is enabled, the API endpoint, the API key, and the tracing project:

  • LANGCHAIN_TRACING_V2
  • LANGCHAIN_API_KEY
  • LANGCHAIN_ENDPOINT
  • LANGCHAIN_PROJECT

In some environments, it is not possible to set environment variables. In these cases, you can set the tracing configuration programmatically.

Recently changed behavior

In response to requests for finer-grained control of tracing via the trace context manager, we changed with trace to honor the LANGCHAIN_TRACING_V2 environment variable in version 0.1.95 of the Python SDK. You can find more details in the release notes. The recommended way to enable or disable tracing without setting environment variables is to use the tracing_context context manager, as shown in the example below.
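
For instance, wrapping a call in tracing_context(enabled=False) suppresses tracing for that call even when LANGCHAIN_TRACING_V2 is set. A minimal sketch (the format_prompt function is a hypothetical placeholder, not part of the main example below):

from langsmith import tracing_context, traceable

@traceable
def format_prompt(question: str) -> str:
    # Hypothetical helper used only to illustrate the disable case
    return f"Please answer concisely: {question}"

# Nothing in this block is traced, even if LANGCHAIN_TRACING_V2=true is set in the environment.
with tracing_context(enabled=False):
    format_prompt("What changed in version 0.1.95?")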

The recommended way to do this in Python is to use the tracing_context context manager. This works for both code annotated with traceable and code within the trace context manager.

import openai
from langsmith import Client, tracing_context, traceable
from langsmith.wrappers import wrap_openai

langsmith_client = Client(
    api_key="YOUR_LANGSMITH_API_KEY",  # This can be retrieved from a secrets manager
    api_url="https://api.smith.langchain.com",  # Update appropriately for self-hosted installations or the EU region
)

client = wrap_openai(openai.Client())

@traceable(run_type="tool", name="Retrieve Context")
def my_tool(question: str) -> str:
    return "During this morning's meeting, we solved all world conflict."

@traceable
def chat_pipeline(question: str):
    context = my_tool(question)
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Please respond to the user's request only based on the given context."},
        {"role": "user", "content": f"Question: {question}\nContext: {context}"},
    ]
    chat_completion = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages
    )
    return chat_completion.choices[0].message.content

# Can set to False to disable tracing here without changing code structure
with tracing_context(enabled=True):
    # Use langsmith_extra to pass in a custom client
    chat_pipeline("Can you summarize this morning's meetings?", langsmith_extra={"client": langsmith_client})
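
The same context manager also applies to code wrapped in the trace context manager rather than the traceable decorator. A minimal sketch continuing the example above (it reuses langsmith_client and chat_pipeline, and assumes trace accepts a client argument; check the SDK reference for your version):

from langsmith import trace, tracing_context

question = "Can you summarize this morning's meetings?"

# Setting enabled=False here would skip sending this run to LangSmith without changing the code below.
with tracing_context(enabled=True):
    with trace(
        name="Chat Pipeline",  # illustrative run name
        run_type="chain",
        inputs={"question": question},
        client=langsmith_client,  # reuse the programmatically configured client
    ) as run_tree:
        answer = chat_pipeline(question)
        run_tree.end(outputs={"answer": answer})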
