The new compliance stack for primary research, mapped
Post-Capvision and post-SEC, expert networks have rebuilt the engagement workflow around MNPI detection at three checkpoints. Here is what the stack actually looks like.

Expert networks now run compliance as a three-stage pipeline on every engagement: pre-call screening, in-call moderation, and post-call transcript review. The shift was forced by the 2023 Capvision raid in Shanghai and a wave of SEC enforcement actions against platforms in 2024, and it is now baked into how the major networks operate. For buy-side clients, the practical effect is that any consultancy without a documented compliance program is increasingly uncallable.
The legal spine has not changed. The Dirks v. SEC (1983) tipper-tippee framework still governs whether a chain of disclosure is actionable, and United States v. Martoma remains the modern reference point for what an aggressive prosecution of expert-network channels looks like. What changed is the operational layer: networks have moved from policy documents to enforced workflow, and a new vendor stack has formed around it.
The three checkpoints, in order
Pre-call screening. Before the call is scheduled, the network verifies the expert's current employer, confirms the engagement falls outside the employer's trading window, and runs a structured questionnaire on what the expert can and cannot discuss. Dialectica's compliance module is one named example; similar tooling exists internally at Guidepoint and Third Bridge. The questionnaire output is logged and retained.
In-call moderation. A trained moderator joins the call with the explicit job of cutting in if the expert drifts toward material non-public information. This was always the theoretical role of the moderator; what changed is that real-time MNPI detection is now a documented, auditable function rather than a soft norm. AI-assisted tooling (CompliAI, Smarsh) is being layered on top to flag risk language during the call, though the moderator remains the decision-maker.
Post-call transcript review. The transcript is reviewed for content that should be redacted before delivery to the client; Guidepoint and Third Bridge reportedly run automatic redaction on the post-call transcript. The redaction layer is where the compliance program leaves a paper trail that holds up in front of an investigations team.
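The three checkpoints can be read as a single auditable pipeline: each stage writes a retained record, and the log is what survives an investigation. A minimal sketch of that shape, with the caveat that every name, field, and rule here is hypothetical and illustrative, not any network's actual system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Engagement:
    expert_id: str
    client_id: str
    topic: str
    audit_log: list = field(default_factory=list)

    def log(self, checkpoint: str, outcome: str) -> None:
        # Every checkpoint writes a timestamped, retained record.
        self.audit_log.append({
            "checkpoint": checkpoint,
            "outcome": outcome,
            "at": datetime.now(timezone.utc).isoformat(),
        })

def pre_call_screen(e: Engagement, employer_blackout: bool,
                    questionnaire_ok: bool) -> bool:
    # Checkpoint 1: employer verification plus structured questionnaire.
    passed = (not employer_blackout) and questionnaire_ok
    e.log("pre_call", "cleared" if passed else "blocked")
    return passed

def in_call_moderate(e: Engagement, flagged_utterances: list[str]) -> None:
    # Checkpoint 2: the moderator (optionally AI-assisted) remains the
    # decision-maker; each intervention becomes an auditable entry.
    for u in flagged_utterances:
        e.log("in_call", f"moderator_intervened: {u}")

def post_call_review(e: Engagement, transcript: str,
                     redact_terms: list[str]) -> str:
    # Checkpoint 3: redact before delivery; the log is the paper trail.
    redacted = transcript
    for term in redact_terms:
        redacted = redacted.replace(term, "[REDACTED]")
    e.log("post_call", f"redacted {len(redact_terms)} terms")
    return redacted
```

The point of the structure is not the logic, which is trivial, but that every stage appends to the same retained log the client never sees and the regulator eventually might.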
Why the stack hardened now
Two things happened in close succession. The Capvision raid signaled that cross-border primary research carries political and regulatory risk that does not respect the buy-side's compliance comfort. Several networks paused or restructured China coverage in the months after. Then 2024 SEC enforcement actions against US-facing expert network platforms made it clear that the agency is willing to test the tipper-tippee chain through the network layer, not just at the end-investor.
The practical response from compliance teams at hedge funds and long-onlys was to tighten the gate on which networks they will engage. The question "does the consultancy have a stated compliance program" is not new. What is new is that it is now load-bearing. Networks that cannot answer it in writing are dropped from the approved-vendor list before the sourcing conversation starts.
Where the vendor stack sits
Three categories worth tracking:
- AI-assisted MNPI detection. CompliAI and Smarsh are the named vendors. The pitch is real-time risk flagging during the call, with the moderator retaining final judgment. Adoption is uneven and the technology is still being calibrated against false-positive rates that would make the moderator's job harder, not easier.
- Structured pre-call questionnaires. Dialectica's compliance module is the example. The product is less about technology than about workflow enforcement: the questionnaire is mandatory, logged, and reviewable.
- Automatic redaction. Reportedly used by Guidepoint and Third Bridge. The redaction layer connects directly to the transcript pipeline, which is where transcription quality and turnaround become a compliance input rather than a convenience feature.
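The false-positive tradeoff in the first category is easy to make concrete. A toy sketch of threshold-based risk flagging, assuming a scored-phrase model (the actual CompliAI and Smarsh methods are not public, and these patterns and scores are invented): the threshold setting directly controls how often the moderator is interrupted.

```python
# Hypothetical phrase-score table, not any vendor's real model.
RISK_PATTERNS = {
    "next quarter's numbers": 0.9,
    "before the announcement": 0.8,
    "our internal forecast": 0.7,
    "in general, the industry": 0.1,
}

def risk_score(utterance: str) -> float:
    # Score is the highest-scoring pattern present in the utterance.
    u = utterance.lower()
    return max((s for p, s in RISK_PATTERNS.items() if p in u), default=0.0)

def flag_for_moderator(utterance: str, threshold: float = 0.6) -> bool:
    # A lower threshold means more flags and more false positives for the
    # moderator to dismiss mid-call; the moderator keeps final judgment.
    return risk_score(utterance) >= threshold
```

Calibration is the whole game: set the threshold too low and the tool interrupts the moderator constantly; too high and it only confirms what the moderator already caught.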
The last point is the one buy-side compliance teams underweight. A redaction process is only as good as the transcript it operates on. If the transcript is delayed, lossy, or inconsistently formatted, the post-call review checkpoint becomes the weakest link in the stack.
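The dependency runs one way: a redaction pass that matches terms silently fails when the transcript garbles those terms. A toy illustration, with hypothetical redaction logic, showing an ASR error defeating an otherwise correct redactor:

```python
def redact(transcript: str, terms: list[str]) -> str:
    # Simple exact-match redaction; real systems are more sophisticated,
    # but any matcher degrades with transcript quality.
    for t in terms:
        transcript = transcript.replace(t, "[REDACTED]")
    return transcript

clean = "Guidance for Q3 is well above consensus."
lossy = "Guidance for cue three is well above consensus."  # ASR error

assert "[REDACTED]" in redact(clean, ["Q3"])
assert "[REDACTED]" not in redact(lossy, ["Q3"])  # the error defeats the redactor
```

A lossy transcript does not make the redaction step look broken; it makes it look like it ran cleanly while leaving the sensitive content in place.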
What to watch next
Three open questions for the next two to three quarters: whether SEC enforcement extends further up the chain into network operators directly; whether AI-assisted MNPI detection moves from flagging to autonomous redaction (and whether buy-side compliance teams accept that); and whether China-facing primary research rebuilds with a hardened compliance layer or stays paused.
Powering institutional-grade transcription for expert networks.
INFLXD provides AI-powered, human-edited transcription with sub-1% error rates for the world's leading expert networks and financial research firms.
Visit inflxd.com →