Large language models are now part of nearly every quant team's toolkit. We separate the genuinely useful from the demoware and walk through what the MCF curriculum covers.
Every bank and hedge fund now has at least one LLM project in production. Most are doing the same handful of things: summarising research, parsing earnings calls, generating SQL from natural-language queries, and producing first-pass analyst memos. A few are doing more interesting work — agent-based research workflows, retrieval-augmented credit analysis, and structured-data extraction from regulatory filings.
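To make that last item concrete, here is a minimal sketch of structured extraction from a filing excerpt. It assumes the OpenAI Python client and JSON-mode output; the model name, field names, and excerpt are illustrative, and the traceability check at the end is the kind of guard that keeps hallucinated numbers out of downstream tables.

```python
# Illustrative sketch: structured extraction from a filing excerpt.
# Assumes the OpenAI Python client; any chat API with JSON output would do.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXCERPT = (
    "Item 1A. Risk Factors. Our revolving credit facility of $750 million "
    "matures in March 2027 and bears interest at SOFR plus 1.85%."
)

PROMPT = (
    "From the filing excerpt below, return JSON with keys "
    "'facility_size_usd_millions', 'maturity', 'reference_rate'. "
    "Use null for anything not stated.\n\n" + EXCERPT
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                      # illustrative model choice
    messages=[{"role": "user", "content": PROMPT}],
    response_format={"type": "json_object"},  # ask for parseable JSON
    temperature=0,
)
fields = json.loads(response.choices[0].message.content)

# Guard against hallucinated figures: the headline number must be
# traceable to the source text before it enters any downstream table.
if str(fields.get("facility_size_usd_millions")) not in EXCERPT:
    raise ValueError(f"untraceable extraction: {fields}")
print(fields)
```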
What we teach
The MCF curriculum covers the practitioner end. Students learn how to wire up retrieval-augmented generation against a finance corpus, how to evaluate an LLM-augmented signal against a baseline, and where the failure modes are (hallucinated numbers, drift over time, jailbreak risk in production deployment).
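The retrieval half of that first exercise can be sketched in a few lines. This is a hedged illustration rather than the course material: TF-IDF similarity stands in for the dense-embedding retrievers used in practice, and the corpus, query, and prompt template are made up.

```python
# Minimal RAG retrieval sketch over a toy finance corpus.
# TF-IDF stands in for a dense-embedding retriever; in practice the
# corpus would be chunked filings, research notes, or call transcripts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Q3 earnings call: management guided FY revenue up 4-6 percent.",
    "10-K risk factors: exposure to floating-rate debt increased in 2024.",
    "Sell-side note: gross margin compression expected to persist into 2025.",
]
query = "What did management say about revenue guidance?"

vectoriser = TfidfVectorizer().fit(corpus + [query])
doc_vecs = vectoriser.transform(corpus)
query_vec = vectoriser.transform([query])

# Rank documents by similarity to the query and keep the top k.
scores = cosine_similarity(query_vec, doc_vecs).ravel()
top_k = scores.argsort()[::-1][:2]
context = "\n".join(corpus[i] for i in top_k)

# The retrieved context is stuffed into the prompt of whatever chat
# model the pipeline uses, with the source passages kept alongside it.
prompt = (
    f"Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}"
)
print(prompt)
```

The design choice worth emphasising is that the retrieved passages travel with the answer, so every quoted number can be traced back to a source document rather than taken on trust.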
What we don't teach
We don't spend much time on model training from scratch. That work is for a small number of frontier-lab teams, not for the working quant. The skill we optimise for is *applied integration*: knowing when an LLM beats a classical model, when it doesn't, and how to ship the thing past compliance.
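What "lift over a baseline" means in code is worth making concrete. A minimal sketch, assuming synthetic data and AUC as the metric: fit the classical features alone, fit them again with the LLM-derived score added, and compare on a time-ordered holdout. Everything here is illustrative except the shape of the comparison.

```python
# Sketch of evaluating an LLM-derived feature's lift over a baseline.
# Synthetic data stands in for real features; the split is kept in time
# order so the comparison is out-of-sample, not in-sample curve fitting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2_000

X_classic = rng.normal(size=(n, 5))    # existing quant features
llm_score = rng.normal(size=n)         # e.g. an LLM sentiment score
y = (X_classic[:, 0] + 0.3 * llm_score + rng.normal(size=n) > 0).astype(int)

split = int(n * 0.7)                   # chronological train/test split
train, test = slice(None, split), slice(split, None)

def auc(features):
    model = LogisticRegression(max_iter=1_000).fit(features[train], y[train])
    return roc_auc_score(y[test], model.predict_proba(features[test])[:, 1])

baseline_auc = auc(X_classic)
augmented_auc = auc(np.column_stack([X_classic, llm_score]))

print(f"baseline AUC   {baseline_auc:.3f}")
print(f"augmented AUC  {augmented_auc:.3f}")
print(f"lift           {augmented_auc - baseline_auc:+.3f}")
```

If the lift shrinks as the holdout moves forward in time, that is the drift failure mode showing up in the numbers rather than in an anecdote.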
If a candidate can wire an LLM into an existing pipeline, evaluate its lift over a baseline, and discuss the failure modes, they're hireable as a quant in 2026.