Writing & Experiments

Notebooks, deep dives
& things I figured out.

Where I document experiments, share failures, and write about building AI systems that actually work in production.

📌 Pinned
BosWeigh: weighing a cow with one photo, and why it's harder than it looks

Monocular depth estimation + five anatomical keypoints + a 150-year-old livestock formula. The pipeline runs end-to-end. The numbers say I've still got work to do. An honest, mid-flight write-up of a research project, including the negative R² I'm trying to fix.

Apr 20, 2026 · 9 min read · New
Computer Vision DepthPro Experiment WIP
Read post →
Notebook preview
Final_Weight_estimator.ipynb
[1]
depth = DepthPro(image) # metric depth, m
[2]
kpts = yolo.predict(image).keypoints
## integrate 3D arc along girth line
[3]
arc = np.linalg.norm(np.diff(pts3d, axis=0), axis=1).sum()
[4]
w = (length_in * girth_in**2) / 660
Out
Estimated Weight (kg): 258.57
Out
[ 3D arc length chart ]
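The preview cells compress the geometry step. As a minimal, self-contained sketch of the girth arc-length computation, here is the same idea with synthetic half-circle points standing in for the real back-projected depth samples (the `pts3d` data below is invented for illustration, not the post's pipeline output):

```python
import numpy as np

# Stand-in girth-line samples: (N, 3) metric 3D points. A half-circle of
# radius 0.5 m, whose true arc length is pi * 0.5 / 1 ... i.e. pi/2 ~= 1.571 m.
theta = np.linspace(0.0, np.pi, 200)
pts3d = np.stack(
    [0.5 * np.cos(theta), 0.5 * np.sin(theta), np.zeros_like(theta)], axis=1
)

# Arc length = sum of Euclidean distances between consecutive 3D points.
segments = np.diff(pts3d, axis=0)              # (N-1, 3) consecutive deltas
arc_m = np.linalg.norm(segments, axis=1).sum()
print(round(arc_m, 3))  # -> 1.571, close to pi/2
```

The `/ 660` divisor in the final cell is consistent with the classic heart-girth formula, girth² × length / 300 in pounds, folded together with the pounds-to-kilograms conversion (300 × 2.2 ≈ 660).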
All posts · 14 total
GPT-2 from scratch: understanding attention by writing it

Rebuilt the 124M GPT-2 stack line by line in PyTorch: multi-head attention, GELU, transformer blocks. It's the cleanest way I've found to actually understand attention. With every important code snippet and the six things that only landed once I wrote it.

gpt2.ipynb
[1]
class MultiHeadAttention(nn.Module):
## causal mask + scaled dot-product
[2]
attn = softmax(Q @ K.T / sqrt(d_k))
[3]
model = GPTModel(GPT_CONFIG_124M)
Out
params: 124M ✓
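The attention cell is shorthand. A dependency-light numpy sketch of the same causal scaled dot-product step (single head, toy shapes, not the post's PyTorch module) looks like:

```python
import numpy as np

def causal_attention(Q, K, V):
    """Scaled dot-product attention with a causal mask (numpy sketch).

    Q, K, V: (seq_len, d_k) arrays. In the full multi-head module this
    runs once per head on sliced linear projections.
    """
    seq_len, d_k = Q.shape
    scores = Q @ K.T / np.sqrt(d_k)                    # (seq_len, seq_len)
    mask = np.triu(np.ones((seq_len, seq_len)), k=1)   # 1s above the diagonal
    scores = np.where(mask == 1, -np.inf, scores)      # block future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
out = causal_attention(Q, K, V)
print(out.shape)  # (4, 8); row 0 attends only to position 0
```

Because the mask zeroes out everything after position 0 in the first row, the first output vector is exactly `V[0]` - a quick sanity check that the causal mask is doing its job.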
Fine-tuning LLaMA 3 on agricultural domain data: what 10 failed runs taught me

I tried to get an 8B model to understand cattle health, crop cycles, and Kannada farming terminology. Here's exactly what went wrong and what eventually worked, with full loss curves and prompting strategies.

llama_agri_finetune.ipynb
[1]
model = load_pretrained("llama-3-8b")
[2]
trainer.train(resume_from_checkpoint=True)
Out
Epoch 3/5: val_loss 0.89 → 0.42 ✓
When YOLO fails in the real world: edge cases in production CV pipelines

Low-light, motion blur, and occlusion: the failure modes benchmarks never cover, and how I found them in production.

yolo_edge_cases.ipynb
[1]
yolo.predict(img, conf=0.25)
Out
0 detections (night-time failure)
[2]
analyze_failures(results_df)
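`analyze_failures` isn't shown in the preview. As a sketch of the kind of breakdown it implies, here is failure rate grouped by capture condition over a toy log (the condition tags and counts are invented for illustration, not real results):

```python
from collections import defaultdict

# Hypothetical per-image log: capture condition and detection count,
# a stand-in for the results_df the post's analyze_failures() consumes.
results = [
    {"condition": "daylight",    "detections": 3},
    {"condition": "daylight",    "detections": 2},
    {"condition": "night",       "detections": 0},
    {"condition": "night",       "detections": 0},
    {"condition": "motion_blur", "detections": 1},
    {"condition": "motion_blur", "detections": 0},
]

# Failure = zero detections; compute failure rate per condition.
totals, failures = defaultdict(int), defaultdict(int)
for r in results:
    totals[r["condition"]] += 1
    failures[r["condition"]] += r["detections"] == 0

rates = {c: failures[c] / totals[c] for c in totals}
print(rates)  # {'daylight': 0.0, 'night': 1.0, 'motion_blur': 0.5}
```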
Building a RAG pipeline that doesn't hallucinate: lessons from GauSwastha

How we combined retrieval with structured livestock data to get reliable, explainable outputs from an LLM.

rag_gauswastha.ipynb
[1]
retriever = FAISSRetriever(docs)
Out
Top-3 retrieved: [doc_1, doc_8, doc_12]
[2]
chain.run(query, context)
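FAISS handles retrieval in the post; the core operation it accelerates can be sketched with plain numpy cosine similarity (the embeddings below are random stand-ins, not real document vectors):

```python
import numpy as np

# Toy document embeddings, unit-normalized so dot product = cosine similarity.
rng = np.random.default_rng(1)
doc_emb = rng.normal(size=(16, 32))
doc_emb /= np.linalg.norm(doc_emb, axis=1, keepdims=True)

query = rng.normal(size=32)
query /= np.linalg.norm(query)

# Score every document against the query and take the top-3 indices,
# mirroring the "Top-3 retrieved" output in the preview.
sims = doc_emb @ query
top3 = np.argsort(-sims)[:3]
print(top3, sims[top3])
```

The retrieved passages then get packed into the prompt alongside the structured livestock data, which is the part that actually keeps the LLM's answers grounded.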
Markowitz optimization with live market data in Python

A step-by-step walkthrough of the mean-variance optimizer I built: constraints, edge cases, and the math behind it.

markowitz_optimizer.ipynb
[1]
weights = optimize(returns, cov)
Out
Sharpe: 1.42 | Sortino: 1.87 ✓
[2]
plot_efficient_frontier()
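As a sketch of the unconstrained core of mean-variance optimization (the post's version adds constraints, which need a proper QP solver), the closed-form tangency weights w ∝ Σ⁻¹μ with toy inputs look like this (returns and covariance here are illustrative, not live market data):

```python
import numpy as np

# Toy expected returns and covariance for three assets.
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.15, 0.03],
                [0.01, 0.03, 0.12]])

# Unconstrained tangency portfolio (risk-free rate 0): w proportional to
# inv(Sigma) @ mu, then normalized to sum to 1. Long-only or box
# constraints break this closed form and require a QP solver.
raw = np.linalg.solve(cov, mu)
w = raw / raw.sum()
print(w.round(3), round(w.sum(), 3))
```

Using `np.linalg.solve` instead of explicitly inverting the covariance matrix is the standard numerically-stable choice here.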