Blenra
Optimized for: Gemini / ChatGPT / Claude
#Observability

Integrating Grafana Tempo with Prometheus for Exemplars

Customize the variables below to instantly engineer your prompt.

Required Variables

[MICROSERVICE_NAME], [TRACE_ID_KEY], [SAMPLING_RATE]

grafana-tempo-prometheus-exemplars-integration.txt
Act as a Full-Stack Observability Engineer. Architect an integration of Prometheus exemplars that links high-latency metrics directly to their corresponding distributed traces in Grafana Tempo for the [MICROSERVICE_NAME] application. Explain the exact code-level configuration required within the OpenTelemetry SDK (Java/Go) to programmatically attach the [TRACE_ID_KEY] as an exemplar label on the Prometheus metric payload at a tuned [SAMPLING_RATE]. Provide the precise Prometheus server configuration required to enable exemplar storage (the `--enable-feature=exemplar-storage` flag, plus any `scrape_config` settings needed to scrape targets in OpenMetrics format, since exemplars are only parsed from OpenMetrics exposition). Finally, detail the exact configuration of the Grafana 'Data Source' settings required to generate the UI overlay, allowing an engineer to click a single spike on a latency graph and jump straight to the timeline waterfall of the failing trace.
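On the Prometheus side, exemplar storage is gated behind a server feature flag rather than a scrape_config parameter. A minimal sketch, with a hypothetical job name and target standing in for [MICROSERVICE_NAME]:

```yaml
# prometheus.yml -- start the server with:
#   prometheus --enable-feature=exemplar-storage --config.file=prometheus.yml
global:
  scrape_interval: 15s
storage:
  exemplars:
    max_exemplars: 100000        # optional cap on the in-memory exemplar buffer
scrape_configs:
  - job_name: checkout-service   # hypothetical stand-in for [MICROSERVICE_NAME]
    static_configs:
      - targets: ["checkout:8080"]  # hypothetical target address
```

Note that the application's `/metrics` endpoint must expose OpenMetrics for exemplars to survive the scrape; in client_golang that means enabling it in the handler options (`promhttp.HandlerOpts{EnableOpenMetrics: true}`).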
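For the final step the prompt asks about, Grafana's Prometheus data source can be pointed at Tempo via provisioning. A minimal sketch, assuming a Tempo data source with UID `tempo` and an exemplar label named `trace_id` (both placeholders for your own values):

```yaml
# grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090      # assumed Prometheus address
    jsonData:
      exemplarTraceIdDestinations:
        # Map the exemplar label carrying the trace ID to the Tempo data
        # source, so clicking an exemplar dot on a latency graph opens the
        # trace waterfall.
        - name: trace_id             # must match the [TRACE_ID_KEY] exemplar label
          datasourceUid: tempo       # UID of the Tempo data source
```

The same mapping can be set interactively under the data source's "Exemplars" section in the Grafana UI instead of via provisioning.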
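To make the SDK step concrete, the snippet below renders the OpenMetrics exposition line that exemplar injection ultimately produces: a histogram bucket sample followed by `# {trace_id=...}`, the observed value, and a timestamp. This is a self-contained sketch of the wire format only; in a real Go service you would attach the exemplar through the Prometheus client library (e.g. client_golang's `ObserveWithExemplar`) rather than formatting text by hand, and `trace_id` here is a placeholder for your [TRACE_ID_KEY].

```go
package main

import "fmt"

// openMetricsExemplar renders a histogram bucket sample with an attached
// exemplar in OpenMetrics text format -- the wire format Prometheus must
// scrape for exemplars to be stored (the server also needs
// --enable-feature=exemplar-storage). In a real service the client library
// emits this line for you; e.g. in client_golang:
//   h.(prometheus.ExemplarObserver).ObserveWithExemplar(v, prometheus.Labels{"trace_id": id})
func openMetricsExemplar(metric, le string, count int, traceID string, value, ts float64) string {
	return fmt.Sprintf("%s_bucket{le=%q} %d # {trace_id=%q} %g %.1f",
		metric, le, count, traceID, value, ts)
}

func main() {
	line := openMetricsExemplar(
		"http_request_duration_seconds", // metric name
		"0.5",                           // bucket boundary
		1,                               // cumulative bucket count
		"3c191d03fa8be065",              // hypothetical trace ID
		0.42,                            // observed latency in seconds
		1623000000,                      // unix timestamp of the observation
	)
	fmt.Println(line)
	// prints: http_request_duration_seconds_bucket{le="0.5"} 1 # {trace_id="3c191d03fa8be065"} 0.42 1623000000.0
}
```

The `# {…}` suffix after a sample is the OpenMetrics exemplar syntax; classic Prometheus text format has no equivalent, which is why the scrape must negotiate OpenMetrics.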

Example Text Output

"A configuration guide for linking metrics and traces, including the Java/Go code snippets for exemplar injection."

Frequently Asked Questions

What is the "Integrating Grafana Tempo with Prometheus for Exemplars" prompt used for?

A configuration guide for linking metrics and traces, including the Java/Go code snippets for exemplar injection.

Which AI tools work with this prompt?

This prompt is optimized for Gemini, ChatGPT, and Claude, and also works with other large language models. Simply copy it and paste it into your preferred AI tool.

How do I customize this prompt?

Use the variable fields above to fill in your specific details. The prompt will auto-update as you type, ready to copy instantly.

Is this prompt free?

Yes! All prompts on Blenra are free to copy and use immediately. No account required.