Commodity traders deal with a lot of noise — news, price movements, macro events — and most tools either give you raw data with no interpretation, or AI summaries that hallucinate because they're not grounded in live data. I wanted to build something that bridged that gap.
The project had four main components, each with a deliberate design decision behind it.
I ran Llama 3.3 — a 70B parameter model — locally on Linux. Running it locally was not the easy path, but it meant no data privacy concerns and no per-call API costs. I used quantization so the model fit within available memory, and scripted automated health checks so if the model service degraded, it restarted without manual intervention.
#!/bin/bash
# LLM Health Check — runs every 2 minutes
RESPONSE=$(curl -sf \
--max-time 10 \
http://localhost:11434/api/health)
if [ $? -ne 0 ]; then
echo "LLM service down — restarting"
systemctl restart ollama
notify_oncall "LLM auto-restarted at $(date)"
fi
# Check memory utilization
MEM=$(free | awk '/Mem/{printf "%.0f", $3/$2*100}')
if [ $MEM -gt 90 ]; then
notify_oncall "Memory critical: ${MEM}%"
fiThis is the most important piece technically. Instead of just asking the LLM a question, every request dynamically injects live market prices and sentiment-tagged news into the context window. The model is always reasoning from current data, not its training knowledge. That is how you eliminate hallucinations in financial analysis — you do not trust the model's memory, you give it fresh facts every time.
async def build_context(query: str) -> str:
# Pull live market data
prices = await fetch_live_prices(
markets=['gold', 'crude_oil', 'natural_gas']
)
# Pull relevant news (last 2 hours)
news = await fetch_sentiment_news(
limit=10,
min_impact='medium'
)
# Select most relevant context
# (token budget: 2000 tokens)
context = select_relevant(
prices=prices,
news=news,
query=query,
max_tokens=2000
)
return f"""
LIVE MARKET DATA (as of {datetime.now()}):
{context.prices}
RECENT NEWS (sentiment-tagged):
{context.news}
QUERY: {query}
"""Built a financial analytics module that calculates a Pearson correlation matrix of 30-day daily price returns across global energy and metal markets — rendered dynamically. This gives traders a quantitative view of how markets are moving together or diverging, not just gut feel.
import pandas as pd
import numpy as np
def calculate_correlation_matrix(
markets: list[str],
window_days: int = 30
) -> pd.DataFrame:
# Fetch 30-day price history
price_data = {}
for market in markets:
prices = fetch_price_history(market, window_days)
# Calculate daily returns
price_data[market] = pd.Series(prices).pct_change()
df = pd.DataFrame(price_data).dropna()
# Pearson correlation matrix
corr_matrix = df.corr(method='pearson')
return corr_matrixPulls global commodity headlines in real-time, runs sentiment tagging, and classifies market impact so traders see signal, not just headlines. Pipeline runs continuously with 60-second data latency as the design target.
End-to-end latency of 60 seconds — from market event to platform analysis. Context injection eliminated hallucinations that make most LLM-based financial tools unreliable.
"This is the project I would point to as the clearest example of building AI that is actually useful, not just impressive-looking. The difference is grounding."