The gap between AI research and allocation policy is getting wider
A systematic review of AI research that actually matters for institutional allocators — week 2.
Over the past few weeks I have spent more time reading AI research than allocation reports.
The contrast between the two is becoming telling.
New models appear every month.
Allocation frameworks adjust slowly.
Formal policy barely moves.
Over those weeks I searched for work that could realistically influence allocation decisions.
Not trading models.
Not product marketing.
Research that could affect Tactical Asset Allocation (short-term tilts), Strategic Asset Allocation (long-term policy weights), Capital Market Assumptions (expected return models) and ALM (asset-liability management).
Scope of the search:
1 January 2026 – 12 March 2026.
Focus on models that change inputs used by allocators, not just execution.
Four findings stood out.
First, Amundi AIP (Dynamic asset allocation: its relevance and signals for 2026).
This is not a paper but a production allocation platform.
Machine learning is used to classify macro regimes, model correlations and generate scenario-based portfolio weights.
The system distinguishes multiple historical regimes going back more than a century and uses those classifications to bias portfolio construction toward or away from risk.
The important point is not the technology.
It is that allocation inputs are already partly model-driven before a committee ever meets.
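To make the mechanism concrete, here is a minimal sketch of regime-conditioned tilting, with synthetic data and a Gaussian mixture classifier standing in for whatever Amundi actually runs.

```python
# Minimal sketch of regime-conditioned allocation (illustrative only, not
# Amundi's implementation). A Gaussian mixture model clusters macro features
# into regimes; the equity weight is tilted down in the regime with the
# highest realised equity volatility.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical monthly macro features: growth surprise, inflation surprise, spread change
macro = rng.normal(size=(480, 3))
# Hypothetical monthly equity returns aligned with the macro history
equity_ret = rng.normal(0.006, 0.04, size=480)

# Classify history into three unobserved macro regimes
gmm = GaussianMixture(n_components=3, random_state=0).fit(macro)
regime = gmm.predict(macro)

# Measure realised equity volatility within each regime
vol_by_regime = {r: equity_ret[regime == r].std() for r in range(3)}

# Tilt a 60% baseline equity weight away from the riskiest regime
baseline_weight = 0.60
current_regime = gmm.predict(macro[-1:])[0]
riskiest = max(vol_by_regime, key=vol_by_regime.get)
equity_weight = baseline_weight - 0.10 if current_regime == riskiest else baseline_weight

print(f"current regime: {current_regime}, equity weight: {equity_weight:.0%}")
```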
Second, AQR 2026 Capital Market Assumptions.
Strategic allocation still depends on expected return estimates.
The latest update shows how valuation models, macro data and long-horizon statistics are combined in systematic frameworks rather than judgement alone.
The result is not a new allocation rule, but a different starting point.
Return expectations for traditional balanced portfolios remain well below long-term historical averages, which directly affects how much risk allocators can justify taking.
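The building-block logic behind most systematic return frameworks fits in a few lines; the numbers below are illustrative placeholders, not AQR's estimates.

```python
# Building-block expected return sketch in the spirit of systematic CMA
# frameworks (illustrative inputs, not AQR's published figures).
dividend_yield = 0.018        # hypothetical current equity yield
real_earnings_growth = 0.015  # hypothetical long-run real growth
expected_inflation = 0.025    # hypothetical inflation assumption
valuation_drag = -0.005       # hypothetical mean-reversion adjustment

equity_expected = dividend_yield + real_earnings_growth + expected_inflation + valuation_drag
bond_expected = 0.042         # hypothetical starting yield as the bond return anchor

# A 60/40 expected return built from those blocks, against a rough
# long-term historical average for a balanced portfolio
portfolio_expected = 0.6 * equity_expected + 0.4 * bond_expected
historical_average = 0.08

print(f"60/40 expected return: {portfolio_expected:.1%} vs ~{historical_average:.0%} historical")
```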
Third, LSTM-based risk budgeting research (Nature).
Neural networks were used to model time-varying risk across multiple asset classes.
Compared with static risk parity, the dynamic approach reduced drawdowns during stress periods without increasing turnover.
That matters because many Strategic Asset Allocation models still assume stable correlations that disappear exactly when risk rises.
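The paper's network is not reproduced here, but the portfolio step it feeds is easy to sketch: size each sleeve inversely to its forecast risk, so the risk budget adapts as conditions change. In the sketch below, hard-coded volatility forecasts stand in for the LSTM's output.

```python
# Dynamic risk budgeting sketch (hard-coded forecasts stand in for the
# paper's LSTM output): position sizes scale inversely with forecast
# volatility, so the risk budget stays roughly constant across regimes.
import numpy as np

def inverse_vol_weights(vol_forecasts: np.ndarray) -> np.ndarray:
    """Allocate in proportion to 1/vol, normalised to sum to one."""
    inv = 1.0 / vol_forecasts
    return inv / inv.sum()

# Hypothetical annualised volatility forecasts: equities, bonds, commodities
calm_regime = np.array([0.12, 0.05, 0.15])
stress_regime = np.array([0.30, 0.07, 0.35])   # vols spike together in stress

print("calm weights:  ", np.round(inverse_vol_weights(calm_regime), 2))
print("stress weights:", np.round(inverse_vol_weights(stress_regime), 2))
# A static risk-parity portfolio would keep the calm weights through the
# stress period; the dynamic version cuts the riskiest sleeves instead.
```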
Fourth, LightGBM tail risk modelling (SSRN).
This work focuses on predicting extreme losses instead of average volatility.
The models improved detection of regime shifts before large drawdowns, allowing earlier reduction of portfolio risk.
For allocators this is directly relevant to ALM, stress testing and downside control.
Better tail estimates can change how much risk a portfolio is allowed to carry.
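As a rough sketch of the approach, not the paper's exact setup, a gradient-boosted classifier can be trained on synthetic market-state features to flag months where the next return falls in the tail, with the predicted probability used as an early de-risking trigger.

```python
# Tail-risk early-warning sketch (synthetic data, not the paper's features
# or labels): a LightGBM classifier predicts whether the next month lands
# in the tail, and the probability drives a simple de-risking rule.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(1)
n = 600

# Hypothetical monthly features: trailing volatility, credit spread, drawdown depth
X = rng.normal(size=(n, 3))
# Synthetic tail label: 1 when an artificial risk score lands in its top 5%,
# a stand-in for a large next-month loss
raw_risk = X @ np.array([0.8, 0.5, 0.6]) + rng.normal(size=n)
y = (raw_risk > np.quantile(raw_risk, 0.95)).astype(int)

model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X[:500], y[:500])

# Out-of-sample tail probabilities feed a simple de-risking rule
tail_prob = model.predict_proba(X[500:])[:, 1]
equity_weight = np.where(tail_prob > 0.20, 0.40, 0.60)  # cut risk when a tail is flagged
print("months de-risked:", int((equity_weight == 0.40).sum()), "of", len(equity_weight))
```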
Taken together, the pattern is clear.
AI is not rewriting allocation policy.
It is rewriting the inputs used to justify that policy.
The sequence is predictable: risk models change first, return assumptions next, tactical signals after that, and formal allocation policy last.
For practitioners this is familiar.
Mandates do not change because a model improves.
They change when the evidence becomes too strong to ignore.
The evidence is accumulating faster than committees are moving.
Research is moving.
Frameworks are evolving.
Policy is slow.
The next question is not whether AI can improve allocation inputs.
It already does.
The real question is which of these models will be the first to influence an actual allocation decision, and what it will take before a committee is willing to follow.



