Narrative

AI CommodityChain — Real-Time Market Intelligence

Tags: AI · LLM · Python · MLOps · LangChain

What Was Broken

Commodity traders often rely on lagging reports and gut instinct because real-time, structured intelligence is either expensive or fragmented.

How It Was Built

Three decisions shaped the build. First, I deployed Llama 3.3, a 70B-parameter model, locally on Linux, tuning system resources and writing automated health checks so the model stayed available. Second, I built a context injection pipeline: before every analysis request, live market prices and sentiment-tagged news are injected into the LLM's context window, which grounds the output in real data instead of hallucinations. Third, I built a financial analytics engine that computes a Pearson correlation matrix across 30-day price returns for global energy and metal markets, rendered dynamically rather than as a static report.
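The context injection step can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the function name, data shapes, and prompt wording are all assumptions.

```python
# Illustrative sketch of context injection: live prices and sentiment-tagged
# news are assembled into the prompt ahead of the analysis request.
# All names and values here are hypothetical.

def build_prompt(prices: dict[str, float],
                 headlines: list[tuple[str, str]],
                 question: str) -> str:
    """Assemble a grounded prompt from live market data and tagged news."""
    price_block = "\n".join(f"{sym}: {px:.2f} USD" for sym, px in prices.items())
    news_block = "\n".join(f"[{sentiment}] {title}" for title, sentiment in headlines)
    return (
        "You are a commodity market analyst. Ground every claim in the data below.\n\n"
        f"Live prices:\n{price_block}\n\n"
        f"Recent news:\n{news_block}\n\n"
        f"Request: {question}\n"
    )

prompt = build_prompt(
    {"WTI": 78.12, "Brent": 82.45},
    [("OPEC+ extends output cuts", "bullish")],
    "Summarize today's crude oil drivers.",
)
print(prompt)
```

The point of the pattern is that the model never answers from parametric memory alone: every request carries the current numbers it is expected to cite.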

What Changed

The platform gives traders a live view of market correlations, sentiment-driven context, and AI-generated analysis, all grounded in real data, with 60-second end-to-end latency. The hallucination reduction from context injection is the part I am most proud of technically.
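The correlation view described above reduces to a small computation. A minimal sketch, assuming daily closing prices; the markets and data here are synthetic placeholders, not the platform's feeds.

```python
# Sketch of the Pearson correlation matrix over 30-day price returns.
# Prices are randomly generated stand-ins for real market data.
import numpy as np

rng = np.random.default_rng(0)
# 31 daily closes for 3 hypothetical markets -> 30 daily returns
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, size=(31, 3)), axis=0)

returns = np.diff(prices, axis=0) / prices[:-1]   # simple daily returns
corr = np.corrcoef(returns, rowvar=False)         # Pearson correlation matrix

print(np.round(corr, 2))
```

Rendering this dynamically just means recomputing the matrix on each refresh from the latest 31 closes instead of baking it into a report.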

Common Questions

Why run the model locally instead of calling a hosted API?

Two reasons: cost and data privacy. Financial data is sensitive, and sending it to a third-party API has compliance implications. Running locally also gives full control over latency and availability.
How did a 70B-parameter model fit on local hardware?

Quantization: I ran a quantized version that fits within the available VRAM. I also wrote health-check scripts that monitored memory and GPU utilization and restarted the model service if it degraded. The goal was unattended reliability.
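A health check like the one described might look like this. The thresholds and the split between "read GPU stats" and "decide to restart" are my assumptions; only the `nvidia-smi` query flags are real.

```python
# Hedged sketch of an unattended GPU health check. Thresholds and the
# degradation heuristic are illustrative assumptions, not the real values.
import subprocess

MEM_LIMIT_PCT = 95.0   # restart if VRAM is nearly exhausted
UTIL_STUCK_PCT = 0.0   # 0% utilization with pending work suggests a hang

def should_restart(mem_used_mb: float, mem_total_mb: float,
                   util_pct: float, requests_inflight: int) -> bool:
    """Decide whether the model service looks degraded."""
    mem_pct = 100.0 * mem_used_mb / mem_total_mb
    hung = requests_inflight > 0 and util_pct <= UTIL_STUCK_PCT
    return mem_pct >= MEM_LIMIT_PCT or hung

def query_gpu() -> tuple[float, float, float]:
    """Read memory and utilization via nvidia-smi (requires an NVIDIA GPU)."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total,utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    used, total, util = (float(x) for x in out.strip().split(", "))
    return used, total, util

# Example decisions (no GPU needed to evaluate the heuristic):
print(should_restart(20000, 24000, 85.0, 3))   # healthy -> False
print(should_restart(23900, 24000, 85.0, 3))   # VRAM exhausted -> True
print(should_restart(12000, 24000, 0.0, 5))    # hung with queued work -> True
```

In production such a check would run on a timer (cron or a systemd timer) and trigger a service restart when `should_restart` fires.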