ClarityX Research Institute

White Paper

Compounding Analytical Intelligence

How an AI Investment System Teaches Itself to Become an Expert

Parson Tang — March 2026


What If Intelligence Could Compound?

In investing, the most powerful force is compounding. A dollar becomes two, then four, then eight -- not through any single brilliant move, but through relentless accumulation over time.

Most AI tools in finance cannot do this. They reason well but do not accumulate. They do not track whether they were right. They do not learn which approaches work in which conditions. Every session starts from zero.

I designed and built a system that resolves this -- one where analytical intelligence compounds the way capital does. Every analysis teaches the system something. Every outcome refines its judgment. Over time, it develops genuine expertise across every domain in finance: self-learning, cross-referencing across domains at a scale no human can match, and getting measurably smarter with each cycle. This paper describes the architecture.


The Foundation: Specialized Agents

The system -- MARY (Multi-Agent Reasoning for You) -- uses four specialized agents rather than a single general-purpose model. Each masters its domain and shares intelligence with the others. This matters because each agent develops its own expertise, and cross-agent patterns emerge that no single-domain system could discover.

  Agent         Domain                   What It Learns Over Time
  -----         ------                   ------------------------
  Macro         Economy & Regimes        Which indicators best predict regime transitions; scenario forecast accuracy
  Fundamental   Company Analysis         Sector-specific valuation patterns; which data matters most per industry
  Portfolio     Construction & Risk      Which allocation tilts outperform in which regimes; risk model calibration
  Alpha Lab     Opportunistic Signals    Signal reliability by market condition; screening accuracy

The architecture is modular. New agents -- credit, alternatives, private markets -- can be added and immediately bootstrapped from the existing knowledge base. A new agent inherits the shared intelligence layer from day one.

The agents are the foundation. What makes the system genuinely different is what happens after each analysis.


The Learning Architecture

Most AI investment tools ship the model and call it done. I inverted this: the model is the starting point. The product is the system that teaches itself to become an expert.

Five stages, each building on the last. The first three require no machine learning -- they deliver meaningful advantage using structured logging and basic statistics alone.

   ANALYZE  --->  RECORD  --->  TRACK  --->  RECOGNIZE  --->  OPTIMIZE
   (Perform       (Log every    (Compare     (Find           (Train models,
    analysis)      analysis)     to actual    patterns in     adjust own
                                 outcomes)    accuracy)       behavior)
      ^                                                          |
      +----------------------------------------------------------+
                    Every cycle compounds expertise

Stage 1: The Analyst's Notebook -- Structured Knowledge Capture

Every analysis is logged as a structured record: what was asked, what data was available, what was missing, what the system concluded, how confident it was, what the macro regime was at the time.

This is not logging for debugging. This is logging for learning. Every entry is a future training sample.
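One way to picture a journal entry is as a small structured record. This is a minimal sketch, not the system's actual schema -- every field name here is illustrative:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AnalysisRecord:
    """One journal entry: everything needed to score this analysis later."""
    question: str         # what was asked
    data_available: list  # sources the analysis actually used
    data_missing: list    # sources that were unavailable at the time
    conclusion: str       # what the system concluded
    confidence: float     # self-reported confidence, 0.0-1.0
    macro_regime: str     # regime label at the time of analysis
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AnalysisRecord(
    question="Is AAPL fairly valued?",
    data_available=["10-K", "price history"],
    data_missing=["guidance call transcript"],
    conclusion="modestly overvalued",
    confidence=0.72,
    macro_regime="late-cycle expansion",
)
entry = asdict(record)  # plain dict, ready to append to the journal
```

The point of the structure is that every field is queryable later: accuracy can be sliced by regime, by confidence, by data completeness.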

On a junior analyst's first day, the portfolio manager says: "Write down everything. Not because your notes matter today -- but because in a year, you'll see patterns in your own thinking that you can't see now."


Stage 2: The Scorecard -- Outcome Tracking

When reality arrives -- earnings, price movements, regime transitions -- the system records what actually happened and links it to the original analysis. Every entry gets a label: the system said X, reality was Y.
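Mechanically, the labeling step is simple: attach the realized outcome to the original record and grade the call. A sketch, with hypothetical field names:

```python
def label_outcome(analysis: dict, actual: str) -> dict:
    """Link reality back to the original analysis and grade the call."""
    return {
        **analysis,
        "actual": actual,
        # the label: the system said X, reality was Y
        "correct": analysis["predicted"] == actual,
    }

journal = [
    {"id": 1, "predicted": "up",   "sector": "tech",    "regime": "expansion"},
    {"id": 2, "predicted": "down", "sector": "biotech", "regime": "transition"},
]
outcomes = ["up", "up"]  # what actually happened
labeled = [label_outcome(a, o) for a, o in zip(journal, outcomes)]
# entry 1 is correct, entry 2 is not -- each labeled entry is a training sample
```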

This is the difference between a portfolio manager who says "I'm good at this" and one who says "Here are 500 calls I made. Here's my hit rate by sector, by regime, by confidence level." Institutional investors allocate capital to track records, not confidence.

This is where the system becomes auditable in a way no human analyst is.


Stage 3: The Pattern -- Statistical Recognition, No ML Required

With enough labeled entries, patterns emerge from pure data analysis. No machine learning needed.

  • "91% directionally accurate on mega-cap tech during expansion, but only 58% on small-cap biotech during transitions."
  • "When macro regime uncertainty combines with rising leverage in consumer discretionary -- that preceded sector underperformance in 8 of 10 instances."
  • "Confidence scores above 0.85 correlate with 88% accuracy. Below 0.65, accuracy drops to 52%."
  • "Analyses with complete SEC filing data are 23% more accurate than those without."
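Findings like these are nothing more than grouped hit rates over labeled journal entries. A sketch over hypothetical entries -- the grouping dimension can be anything the journal records:

```python
from collections import defaultdict

def hit_rate_by(entries, key):
    """Directional accuracy grouped by any dimension:
    sector, regime, confidence band, data completeness."""
    hits, totals = defaultdict(int), defaultdict(int)
    for e in entries:
        k = key(e)
        totals[k] += 1
        hits[k] += e["correct"]  # True counts as 1
    return {k: hits[k] / totals[k] for k in totals}

entries = [
    {"sector": "tech",    "regime": "expansion",  "confidence": 0.90, "correct": True},
    {"sector": "tech",    "regime": "expansion",  "confidence": 0.88, "correct": True},
    {"sector": "biotech", "regime": "transition", "confidence": 0.60, "correct": False},
]
by_sector = hit_rate_by(entries, lambda e: e["sector"])
by_band = hit_rate_by(
    entries, lambda e: "high" if e["confidence"] >= 0.85 else "low"
)
```

With a few thousand entries, the same two-line query surfaces every pattern in the bullet list above.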

A doctor who has treated 10,000 patients does not need a textbook to recognize patterns. The experience is the pattern recognition. The journal gives the system that experience at scale.

This is where genuine sector expertise emerges -- not programmed by an engineer, but discovered from the system's own history. What works, what does not, and critically, why something works in one scenario but fails in another. Not blind application of rules. Intelligence refined by experience.

Stages 1 through 3 require zero machine learning. Most of the practical value lives here. ML and RL are the ceiling, not the floor.


Stage 4: The Model -- Machine Learning

With labeled training data, the system trains predictive models. Given this data profile, this regime, this sector: what is the likely outcome?

The system begins to outperform its own prompts -- calibrated prediction informed by thousands of prior analyses, across more dimensions than any human mind can hold simultaneously.

An experienced fund manager has intuition: "This setup reminds me of 2018." That is pattern matching on experience. Machine learning gives the system the computational equivalent -- except across its entire history, all at once.
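As a toy stand-in for the trained model, consider a frequency baseline over (regime, sector) profiles. The real system would use proper ML, but the interface is the same: a data profile goes in, a likely outcome and its empirical probability come out. Everything below is illustrative:

```python
from collections import Counter, defaultdict

class OutcomeModel:
    """Toy predictive model: P(outcome | regime, sector) from labeled history."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def fit(self, labeled):
        for e in labeled:
            self.counts[(e["regime"], e["sector"])][e["actual"]] += 1

    def predict(self, regime, sector):
        c = self.counts[(regime, sector)]
        total = sum(c.values())
        if not total:
            return None, 0.0  # no history: outside demonstrated competence
        outcome, n = c.most_common(1)[0]
        return outcome, n / total

model = OutcomeModel()
model.fit([
    {"regime": "expansion", "sector": "tech", "actual": "up"},
    {"regime": "expansion", "sector": "tech", "actual": "up"},
    {"regime": "expansion", "sector": "tech", "actual": "down"},
])
outcome, prob = model.predict("expansion", "tech")  # ("up", 2/3)
```

Note the behavior on an empty profile: a model trained only on journal data cannot pretend to know regimes it has never seen -- the honest default is "no call."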


Stage 5: The Optimizer -- Reinforcement Learning

The system adjusts its own behavior based on outcomes:

  • Routes harder questions to different analytical approaches
  • Calibrates confidence empirically -- not a default number, but "historically, at this confidence level, I was right X% of the time"
  • Flags when it is operating outside its demonstrated competence
  • Weights data sources based on which ones improved accuracy in which contexts
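The calibration bullet, for instance, reduces to replacing a stated confidence with the empirical hit rate of past calls made at a similar level. A sketch, with a hypothetical bucket width:

```python
def calibrate(stated: float, history: list, width: float = 0.1) -> float:
    """Replace stated confidence with the empirical hit rate of
    past calls made at a similar confidence level."""
    nearby = [
        e["correct"] for e in history
        if abs(e["confidence"] - stated) <= width / 2
    ]
    if not nearby:
        return stated  # no track record at this level: nothing to correct with
    return sum(nearby) / len(nearby)

history = [
    {"confidence": 0.85, "correct": True},
    {"confidence": 0.86, "correct": True},
    {"confidence": 0.84, "correct": False},
    {"confidence": 0.60, "correct": False},
]
adjusted = calibrate(0.85, history)  # 2/3: "when I said ~85%, I was right 67% of the time"
```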

A senior portfolio manager does not just analyze better -- they know what they do not know. They have learned to learn.


The compounding thesis: Each stage builds on the previous. No shortcuts -- Stage 5 requires the data from Stages 1 through 4. A system that has analyzed 10,000 companies across three market cycles has pattern recognition that a new entrant cannot replicate by throwing more compute at the problem. The data flywheel is the moat.

Stage 1 is built and in production. Stage 2 is in active development. Stages 3 through 5 are the designed trajectory. I make this distinction because intellectual honesty about what is built versus what is planned is itself a signal of engineering rigor.


The Multiplier: Cross-Domain Intelligence

Single-domain pattern recognition is valuable. Cross-domain pattern recognition is where this system becomes something no individual tool -- or individual human -- can match.

No person can simultaneously absorb macroeconomic indicators, central bank communications, company filings across hundreds of firms, sentiment data, technical signals, and portfolio risk metrics -- all cross-referenced against what actually worked in similar conditions. The volume is too large, the cross-references too numerous, the history too vast for one mind.

When each agent accumulates expertise and shares context, compound patterns emerge:

  • Macro detects late-cycle transition + Fundamental flags rising leverage in consumer discretionary = that combination preceded sector underperformance in 8 of 10 instances. The Fundamental Agent learns to weight cash flow over growth metrics in these regimes -- because historically, growth-weighted analyses were 34% less accurate.

  • Fundamental flags improving margins in industrials + macro regime is early expansion = historically a strong overweight signal for the Portfolio Agent.

  • Alpha Lab momentum signals are 78% reliable in trending regimes but only 41% during transitions -- a pattern visible only when cross-referenced with the Macro Agent's regime classification.
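The momentum-by-regime finding is, mechanically, one agent's hit rate conditioned on another agent's classification -- a join across the shared intelligence layer. A sketch with illustrative field names and toy data:

```python
from collections import defaultdict

def reliability_by_regime(signals, regimes):
    """Alpha Lab signal accuracy, conditioned on the Macro Agent's
    regime label for the same date (cross-agent join on date)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for s in signals:
        regime = regimes[s["date"]]
        totals[regime] += 1
        hits[regime] += s["correct"]
    return {r: hits[r] / totals[r] for r in totals}

regimes = {"2025-01": "trending", "2025-02": "trending", "2025-03": "transition"}
signals = [
    {"date": "2025-01", "correct": True},
    {"date": "2025-02", "correct": True},
    {"date": "2025-03", "correct": False},
]
reliability = reliability_by_regime(signals, regimes)
```

Neither agent alone can see this pattern: the Alpha Lab has the signals, the Macro Agent has the regimes, and the finding only exists at their intersection.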

A CIO at a top fund holds macro, fundamental, portfolio, and tactical views simultaneously, looking for where they reinforce or contradict each other. Cross-domain learning is the computational version of that synthesis -- except it processes more data, across more dimensions, and never forgets.

Year 1: Pattern recognition begins within individual domains. Year 2: Cross-agent patterns emerge. Year 3+: The system develops the kind of multi-dimensional expertise that takes a human analyst decades to build.


Trust Architecture

The learning architecture does not just make the system smarter. It makes it honest. A system that tracks every analysis and compares it to outcomes cannot hide behind confident language.

Evidence first, then conclusion. Every number traces to a verified source -- SEC EDGAR filings, central bank and government data feeds, institutional-grade market data APIs, sentiment aggregation across news and survey sources. When data is missing, the system says so explicitly.

Measurable accuracy. The system's accuracy is not a claim -- it is a statistic. By sector, by regime, by confidence level.

Calibrated confidence. "85% confident" means "historically, when I said 85%, I was right 85% of the time." The opposite of general-purpose AI, where confidence is a linguistic choice, not a statistical measurement.

Auditability. A compliance officer can reconstruct exactly what data the system had and how it reached its conclusion.

Engineering discipline. Over 1,600 automated tests. Production-grade, not a prototype.

It is the difference between a fund manager who says "trust me" and one who hands you a verified, audited track record. The learning architecture produces the equivalent of an audited track record -- for an AI system.


The Moat

Just as compound returns create wealth that linear returns cannot match, compound analytical intelligence creates expertise that static systems cannot replicate.

Data flywheel. Every analysis feeds the journal. Every outcome creates a label. Every pattern informs the next analysis. You can copy the architecture. You cannot copy the accumulated knowledge.

Cross-cycle learning. Expertise is forged across market cycles, not within one. A system that has analyzed companies through expansion, contraction, and transition has learned things a calm-market system has never encountered.

Extensibility. New agents multiply the system rather than fragmenting it. Every new domain adds cross-referencing power to every existing domain.

Practitioner design. Designed by someone who sat in the seat -- 20 years at Goldman Sachs, J.P. Morgan, and Credit Suisse. The architecture reflects constraints only practitioners understand: what data actually matters, what "accuracy" means in a portfolio context, the difference between sounding right and being right.

The head start compounds. That is what makes it a moat.


An Invitation

The companion to this paper -- "The Agentic Private Bank" -- described a design for relationship intelligence and capital acquisition. That paper said: I understand the business. This paper says: I built a system that learns.

If your organization is evaluating how AI should evolve beyond static tools -- from systems that sound intelligent to systems that compound intelligence -- I welcome the conversation.

Parson Tang clarityxresearch@gmail.com


Parson Tang is the founder of ClarityX Research Institute and the architect of MARY, a production multi-agent investment intelligence system. He has over 20 years of experience in asset management and private banking, including roles at Goldman Sachs, J.P. Morgan, and Credit Suisse. He holds a Computer Science degree from the University of Southern California and an MBA from the University of Oxford. His analytical work is published at clarityxinstitute.com.