Strategy Advisor
Conversational Intelligence Layer for Investment Analytics
Prepared for Nahel Rachet & Antonin Dechaine — April 2026
Your Platform Today
A comprehensive analysis of your existing analytical engine
Module Architecture
Cascade Pattern
Foundation identifies markets, downstream modules fan out per market. A single run generates dozens of LLM calls.
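The fan-out above can be sketched in a few lines. This is an illustrative model only: the names (`planCascade`, `MODULES`) and the module list are assumptions, not taken from your codebase.

```typescript
// Hypothetical sketch of the cascade pattern: a foundation step identifies
// markets, then every downstream module runs once per market.
type LlmCall = { module: string; market: string };

// Illustrative module names, not your actual module groups.
const MODULES = ["competition", "pricing", "regulation", "demand"];

function planCascade(markets: string[]): LlmCall[] {
  const calls: LlmCall[] = [];
  for (const market of markets) {
    for (const module of MODULES) {
      calls.push({ module, market });
    }
  }
  return calls;
}

// 3 markets x 4 modules = 12 downstream LLM calls, plus the foundation call.
const calls = planCascade(["DACH", "Nordics", "Benelux"]);
```

This is why a single run fans out into dozens of calls: the call count is the product of markets and modules, not their sum.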
Data Architecture
Config-driven prompt templates
Execution lifecycle tracking
Wide-table analysis results
Per-call cost and token logging
Key Design Patterns
Config-Driven Prompts
Prompts stored in database, not hardcoded. Version-controlled and editable without deployments.
Multi-Provider Routing
Claude and OpenAI switching via if-node logic. Provider selection per prompt, not per deployment.
Wide Table Outputs
One row per run, one column per module. Flat structure enables simple queries across all analysis types.
Cost Tracking
Per-execution token and cost logging. Full observability into LLM spend by module, market, and provider.
What's Missing
The conversational layer. There is no way for users to interact with analysis results through natural language. No orchestration triggered from conversation. No chat interface. The Strategy Advisor fills this gap — turning static outputs into an interactive analytical partner.
The Strategy Advisor — Architecture
How the conversational intelligence layer works
Core Capabilities
Request Lifecycle
Three-Layer State Model
Separated because they change at different rates and serve different query patterns.
Analytical Outputs
Existing table — prompt_outputs
Module results from completed workflows. Stable once written. Queried by SA to answer user questions.
Write-once, read-many
Orchestration State
New table
Workflow triggers, execution status, idempotency keys. Changes constantly as workflows start and complete.
High-frequency writes
Conversation State
New table
Messages per run per session. Grows linearly as users interact. Includes role, content, and tool calls.
Append-only, linear growth
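The three layers above can be sketched as row types. Only `prompt_outputs` exists today; the other two table shapes and all column names besides `run_id` are assumptions for illustration, not final schema.

```typescript
// Layer 1 — analytical outputs (existing wide table): one row per run,
// one nullable text column per module. Module column names are illustrative.
interface PromptOutputRow {
  run_id: string;
  market_overview: string | null;
  competitor_scan: string | null;
}

// Layer 2 — orchestration state (new table): high-frequency writes as
// workflows start and complete.
interface OrchestrationRow {
  run_id: string;
  workflow_name: string;
  status: "queued" | "running" | "completed" | "failed";
  idempotency_key: string;
  updated_at: string;
}

// Layer 3 — conversation state (new table): append-only messages,
// including role, content, and tool calls.
interface ConversationMessageRow {
  session_id: string;
  run_id: string;
  role: "user" | "assistant" | "tool";
  content: string;
  created_at: string;
}

const msg: ConversationMessageRow = {
  session_id: "s1",
  run_id: "r1",
  role: "user",
  content: "What did the competitor scan find?",
  created_at: new Date().toISOString(),
};
```

Separating the layers this way keeps the write-once analytical table untouched by chat traffic, which is the point of the three-layer split.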
The SA Tool Set
Retrieval Strategy
Async Execution Model
Key point: User never waits in silence.
System Prompt Architecture
Backend Architecture — Two Options
We present both approaches with our recommendation
Option A: Node.js / TypeScript Service
Advantages (Recommended)
- Same language as existing stack (Next.js, n8n)
- Richest LLM SDK ecosystem
- Maintainable by future team
- Provider switching out of the box
- Streaming and tool-use handled natively
Trade-offs
- Less raw performance than compiled language
Option B: Go Service
Advantages
- Superior performance and concurrency
- Clean compiled binary
- 16 years engineering depth (language-agnostic)
Trade-offs
- Immature LLM SDK ecosystem
- Adds new language to codebase
- Harder to maintain without Go expertise
- Streaming + tool-use requires significant custom code
Why We Recommend Option A
The Strategy Advisor is I/O bound — waiting on LLM APIs and database queries — not compute-bound. TypeScript's ecosystem advantage for LLM applications outweighs Go's performance advantage for this specific use case. Your future team can read, debug, and extend TypeScript. The AI SDK handles provider switching, streaming, and tool-use loops that would require months of custom development in Go.
Workflow Engine Strategy
Pragmatic recommendation for your orchestration layer
Your n8n workflows work. 52 sub-workflows across 5 module groups are running, producing quality output. That's not something to throw away.
Stay on n8n for Phase 1-2. Build SA with a workflow abstraction layer.
1. Don't rebuild 52 workflows before proving SA works — validate the conversation layer first.
2. An abstraction layer makes SA engine-agnostic — SA calls triggerWorkflow(); your code decides whether that hits n8n, Inngest, or Temporal.
3. Migrate when you need to, not before — if n8n hits scaling limits in production, swap the engine behind the abstraction without touching SA code.
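A minimal sketch of that abstraction layer, assuming an injected HTTP client and an illustrative n8n webhook path (neither is taken from your actual deployment):

```typescript
// SA code depends only on this interface; the engine behind it is swappable.
interface WorkflowEngine {
  triggerWorkflow(
    name: string,
    payload: Record<string, unknown>
  ): Promise<{ executionId: string }>;
}

// Injected fetch-like client, so the adapter is testable without a network.
type FetchLike = (
  url: string,
  init: { method: string; body: string }
) => Promise<{ json(): Promise<any> }>;

class N8nEngine implements WorkflowEngine {
  constructor(private baseUrl: string, private fetchFn: FetchLike) {}

  async triggerWorkflow(name: string, payload: Record<string, unknown>) {
    // n8n exposes workflows behind webhook URLs; this path shape is illustrative.
    const res = await this.fetchFn(`${this.baseUrl}/webhook/${name}`, {
      method: "POST",
      body: JSON.stringify(payload),
    });
    const data = await res.json();
    return { executionId: String(data.executionId) };
  }
}

// Swapping engines later means writing a new WorkflowEngine implementation;
// SA code keeps calling triggerWorkflow() unchanged.
async function demo() {
  const stubFetch: FetchLike = async () => ({
    json: async () => ({ executionId: "exec-1" }),
  });
  const engine: WorkflowEngine = new N8nEngine("https://n8n.example.com", stubFetch);
  return engine.triggerWorkflow("market-foundation", { runId: "r1" });
}
```

An Inngest or Temporal adapter would implement the same interface, which is what makes the later migration a drop-in swap.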
Inngest: Serverless, event-driven. Good fit if you move to Vercel. Moderate migration.
Temporal: Enterprise-grade durability. Overkill for current scale. Complex setup.
Trigger.dev: Developer-friendly, TypeScript-native. Closest to n8n in simplicity.
Failure Mode Catalog
18 identified risks with prevention strategies — because production reliability matters
Evaluation & Testing Strategy
How we prove the system works — not just in demos, but in production
Conversation Quality
- Multi-turn test scripts (10+ scenarios covering different user intents)
- Tone evaluation: does SA sound like a senior strategy advisor?
- Appropriate confidence levels: admits uncertainty when evidence is thin
Retrieval Quality
- Precision: does SA retrieve the right outputs for each question?
- Recall: does it miss relevant outputs?
- Measured across runs with varying numbers of completed modules
Routing Quality
- Decision accuracy: answer directly vs. trigger workflow vs. retrieve output
- Evaluated on a labeled test set of user questions with expected routing decisions
- Edge cases: ambiguous questions, questions about missing data
Deliverable Quality
- Structured evaluation against reference outputs
- Coverage: does the deliverable use all relevant source material?
- Accuracy: are cited facts traceable to source modules?
Reliability Testing
- Partial-state scenarios (some modules complete, others pending/failed)
- Overlapping request handling
- Long session tests (20+ turns with progressive context growth)
- Provider failover tests
Integration Testing
- SA to Supabase (state reads/writes)
- SA to n8n (webhook triggers and callbacks)
- End-to-end flow: user message through to streamed response with tool use
Per-Phase Testing Milestones
Delivery & Timeline
Five phases, each independently valuable
Architecture document detailing SA state model, tool design, retrieval strategy, and system prompt architecture. Plus a working conversation prototype connected to your Supabase that reads existing prompt_outputs and answers questions about completed analyses.
Free — our commitment to proving the approach before you invest
UI/UX design system and wireframes, conversation engine with streaming, three-layer state model in Supabase, manifest retrieval, selective output loading, basic chat UI implementation, system prompt design and iterative tuning, project setup and CI/CD
Workflow triggering via n8n webhooks, status tracking, async UX (conversation continues during background work), duplicate prevention with idempotency, dependency management, workflow engine abstraction layer
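The duplicate-prevention item above can be sketched as a deterministic key plus a check-before-trigger gate. The in-memory set is illustrative only; in the real system this state would live in the orchestration table.

```typescript
// Tracks which (run, workflow) pairs are currently in flight.
const inFlight = new Set<string>();

// Deterministic: the same run and workflow always produce the same key,
// so a retry or a double-click maps to the same entry.
function idempotencyKey(runId: string, workflow: string): string {
  return `${runId}:${workflow}`;
}

// Returns true if the trigger should proceed, false if it's a duplicate.
function tryAcquire(runId: string, workflow: string): boolean {
  const key = idempotencyKey(runId, workflow);
  if (inFlight.has(key)) return false; // duplicate: already running
  inFlight.add(key);
  return true;
}

// Called when the workflow completes or fails, freeing the key.
function release(runId: string, workflow: string): void {
  inFlight.delete(idempotencyKey(runId, workflow));
}
```

The same pattern, backed by a unique constraint on the idempotency key column, prevents duplicates even across multiple SA instances.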
Deliverable generation (IC memo, discussion guide, diligence scoping), evidence sufficiency checks, pgvector semantic search setup, output embedding pipeline, conversation summarization, chat UI completion
Full test suite (unit tests for all 57 L2s, integration tests, AI quality evaluation), failure recovery, monitoring setup (Sentry + uptime), chat UI polish, load testing, documentation
$25/hr x 618 hours = $15,450. The 40-hour POC is free. The scope section shows manual hours (896) alongside AI-accelerated hours (618), so you can see exactly where acceleration applies and where it doesn't. The 17-week calendar timeline spreads those 618 working hours across review cycles, feedback incorporation, and access dependencies between phases.
Phase 0: Free POC — Statement of Work
40 hours, zero cost — what you get and how we prove the approach
Validate the Strategy Advisor architecture with a working prototype before either party commits to the full engagement. You evaluate our technical approach and code quality. We map your system deeply enough to give accurate estimates.
What We Deliver
Architecture Document
- Three-layer state model design (analytical outputs, orchestration state, conversation state)
- SA tool set specification (get_manifest, read_output, trigger_workflow, check_status, generate_deliverable)
- Retrieval strategy design (manifest, selective, semantic search)
- System prompt architecture (base rules, dynamic injection, grounding strategy)
- Async execution model (streaming, webhook callbacks, notification flow)
- Failure mode catalog with prevention strategies
- Recommended tech stack with rationale
- Data model for new Supabase tables (schemas, indexes, RLS policies)
- Acceptance criteria for the full engagement — aligned with your business goals, user expectations, and measurable quality benchmarks
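The five-tool set specified above can be sketched as a dispatch registry. The handler bodies below are stubs and the argument and return shapes are assumptions; the real specification is what the architecture document delivers.

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Registry of the five SA tools. Each handler here is a stub standing in
// for the real implementation.
const tools: Record<string, ToolHandler> = {
  get_manifest: async ({ runId }) => ({ runId, modules: ["market_overview"] }),
  read_output: async ({ runId, module }) => ({ runId, module, content: "stub" }),
  trigger_workflow: async () => ({ status: "queued" }),
  check_status: async () => ({ status: "running" }),
  generate_deliverable: async () => ({ type: "ic_memo" }),
};

// The LLM's tool-use loop resolves a tool call by name and executes it.
async function dispatch(name: string, args: Record<string, unknown>) {
  const handler = tools[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}
```

In production these definitions would carry parameter schemas so the LLM provider can validate tool calls before they reach the handlers.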
Working Conversation Prototype
- Node.js/TypeScript service connected to your Supabase instance
- Reads existing prompt_outputs for any completed run
- Builds manifest from populated output columns
- Answers questions about completed analyses using selective retrieval
- Streams responses via SSE
- Basic system prompt with investment advisor tone
- Deployed and accessible for testing
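Manifest building from the wide table is simple enough to sketch here: list which module columns are populated so the SA knows what it can retrieve without loading full outputs. The column names and metadata set below are illustrative, not your actual schema.

```typescript
// A prompt_outputs row: one nullable text column per module, plus metadata.
type OutputRow = Record<string, string | null>;

// Columns that are metadata rather than module outputs (illustrative).
const NON_MODULE_COLUMNS = new Set(["run_id", "created_at"]);

// Returns the names of module columns that hold a completed output.
function buildManifest(row: OutputRow): string[] {
  return Object.entries(row)
    .filter(([col, value]) => !NON_MODULE_COLUMNS.has(col) && value !== null)
    .map(([col]) => col);
}

const manifest = buildManifest({
  run_id: "r1",
  created_at: "2026-04-01",
  market_overview: "full analysis text",
  competitor_scan: null, // module not yet complete
});
```

The manifest is cheap to compute and small to inject into context, which is what makes selective retrieval work: the SA reads full outputs only for the modules a question actually needs.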
What's NOT in the POC
- Workflow triggering (no n8n integration yet)
- Deliverable generation
- Semantic search (pgvector)
- Production chat UI (prototype uses minimal interface)
- Multi-provider switching (single provider for POC)
- Conversation persistence (sessions are ephemeral in POC)
What We Need From You
- Read access to your Supabase instance (prompt_outputs, workflow_runs tables)
- One completed run_id with populated module outputs for testing
- 30-minute walkthrough of the n8n workflow structure
- Access to a shared repository for code delivery
Acceptance Criteria
1. Architecture document reviewed and discussed
2. Prototype can read outputs from a real run and answer questions about the analysis
3. Response quality is evaluated on 5+ test questions
4. Both parties align on whether the full engagement makes sense
Timeline
Start within 48 hours of acceptance. Architecture document delivered in 3-4 days. Working prototype delivered by end of week 1. Review call to discuss findings and decide on next steps.
Detailed Scope
Every component, every hour — fully transparent
AI Acceleration
We use Claude Code, Cursor, and automated testing tools to accelerate delivery. Manual hours represent traditional development time. AI-accelerated hours reflect our actual delivery pace. Average acceleration: 1.4x across the project. Factors range from 1.0x (human-only tasks like UAT) to 2.5x (boilerplate, standard CRUD).
Test Coverage
Every L2 component in the scope has a corresponding test allocation of approximately 2 hours manual / 1 hour AI-accelerated. Backend L2s receive unit tests covering core logic, edge cases, and error handling. Frontend L2s receive component tests covering rendering, interaction, and state management. Tests are grouped by module in the scope table for readability — the per-L2 commitment is reflected in the grouped hours (e.g., ‘Conversation engine tests (6 L2s)’ = 6 L2s × 2h = 12h manual).
SA Backend Service
426h manual → 313h with AI
Chat UI Frontend
174h manual → 117h with AI
Testing & Evaluation
230h manual → 153h with AI
Infrastructure & Setup
66h manual → 35h with AI
Grand Total: 896h manual → 618h with AI
Technology Stack
Proven tools, minimal new infrastructure
Node.js / TypeScript
SA Backend: Hono or NestJS. Independent service, same language as your existing stack.
Why: Your existing stack is TypeScript (Next.js, n8n). Same language means your future team can read, debug, and extend the SA codebase. AI SDK ecosystem is richest in TypeScript. Alternatives considered: Go (better raw performance but immature LLM SDKs, adds new language), Python (LangChain ecosystem but doesn't match your stack).
Vercel AI SDK
LLM Abstraction: Provider switching (Anthropic, OpenAI, Google), streaming, tool-use loops — all handled natively.
Why: Thin SDK that handles provider switching, streaming, and multi-round tool-use without forcing an opinionated framework. Alternatives considered: LangChain.js (too many abstractions that conflict with our custom state model), LangGraph.js (interesting for agents but adds complexity we don't need in Phase 1), raw API calls (too much boilerplate for streaming + tool loops).
Supabase / PostgreSQL
Database (existing): Your existing database. We add 2 new tables for orchestration and conversation state. No migration needed.
Why: Already in your stack with proven data. Adding 2 tables is trivial. Row-Level Security for multi-tenant isolation when you go to production. Alternatives considered: separate PostgreSQL (unnecessary migration), MongoDB (wrong data model for structured analytical outputs).
pgvector
Semantic Search: PostgreSQL extension, enabled in Supabase with one command. No separate vector database required.
Why: Runs as a PostgreSQL extension inside your existing Supabase — no separate vector database to manage, no data sync pipeline. Alternatives considered: Pinecone (managed but adds external dependency and data residency complexity), Qdrant (powerful but overkill at current scale), Weaviate (same concern).
Server-Sent Events
Streaming: Real-time token streaming to the chat UI. Native browser support, no WebSocket complexity.
Why: One-directional streaming from server to client — exactly what token-by-token LLM responses need. Native browser support, no library required on the frontend. Alternatives considered: WebSockets (bidirectional but unnecessary complexity for this use case, harder to debug), HTTP polling (too much latency for real-time streaming).
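The SSE wire format itself is trivial, which is part of the appeal. A minimal framing helper, assuming an illustrative `token` event name (the payload shape is our choice, not part of the spec):

```typescript
// Formats one SSE event: optional "event:" line, one "data:" line per
// payload line, and a blank line terminating the event, per the
// EventSource specification.
function formatSseEvent(data: string, event?: string): string {
  const lines: string[] = [];
  if (event) lines.push(`event: ${event}`);
  // Multi-line payloads need one "data:" line each.
  for (const part of data.split("\n")) {
    lines.push(`data: ${part}`);
  }
  return lines.join("\n") + "\n\n"; // blank line ends the event
}

// A token chunk as the browser's EventSource would receive it on the wire:
const frame = formatSseEvent("Hello", "token");
// frame === "event: token\ndata: Hello\n\n"
```

On the frontend, the native `EventSource` API (or a `fetch` reader) consumes these frames with no extra library, which is the "no WebSocket complexity" point above.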
n8n
Workflow Engine (existing): Your 52 sub-workflows stay as-is. SA triggers via webhook API, receives completion callbacks.
Why: Your 52 workflows already work. Rebuilding them would burn months with zero user value. We build SA with an abstraction layer so n8n can be swapped later without rewriting the SA. Alternatives considered: Inngest, Temporal, Trigger.dev (all valid for production scale, but migrating before SA is proven is premature).
Next.js
Chat UI: Chat interface built as a new Next.js application. Your existing Lovable mockup provides the design direction.
Why: Next.js is your chosen frontend framework. The chat UI is a new application we build from scratch, informed by your Lovable mockup. Alternatives considered: standalone React app (unnecessary if you plan to extend the platform), Vue/Svelte (wrong ecosystem for your stack).
Docker
Deployment: SA backend containerized for EU deployment. Any cloud provider, any region.
Why: SA backend runs as an independent service that needs EU deployment flexibility. Docker gives you any cloud provider, any region, consistent environments. Alternatives considered: Vercel serverless (connection limits for long-running LLM calls and SSE streams), bare metal (no portability).
System Architecture
Strategy Advisor service integration with existing analytical engine
No New Infrastructure
Your core infrastructure stays untouched (Supabase, n8n, rendering pipeline). We build two new components — the SA backend service and the Chat UI — and extend your existing Supabase with pgvector and two new tables. No new databases, no new hosting providers, no new frameworks.
Fast-Follow & Future Stack
Technologies we recommend adopting after the core SA is proven
Redis
Fast Follow: In-memory cache for session state, rate limiting, and real-time presence.
When: Adopt when concurrent sessions exceed ~50. Supabase handles state at current scale. Redis adds operational complexity that isn't justified until you have paying users generating concurrent load.
BullMQ / Message Queue
Phase 5+: Job queue for reliable async workflow execution with retries and priorities.
When: Adopt when migrating off n8n to custom orchestration. n8n currently handles queueing and retries. BullMQ (backed by Redis) or Inngest become relevant when you need programmatic control over workflow execution order, retry policies, and dead-letter handling.
LangSmith / Eval Pipeline
Fast Follow: LLM observability platform for tracing, evaluating, and debugging AI interactions.
When: Adopt after Phase 2 when the SA is handling real conversations. During Phase 1-2, our custom evaluation suite (golden tests, retrieval scoring) is sufficient. LangSmith or Braintrust add value when you need production-scale tracing across hundreds of conversations.
Push Notifications (FCM/APNs)
Phase 5+: Browser and mobile push notifications for workflow completion alerts.
When: Adopt when users need to be notified outside the chat interface. In-chat notifications are included in the core scope. Push notifications require a notification service, user preference management, and potentially a service worker — worth it only when users leave the tab while workflows run.
PWA Capabilities
Phase 5+: Progressive Web App: offline support, install prompt, background sync.
When: Adopt when mobile usage is significant. The SA chat UI works on mobile browsers today. PWA adds install-to-homescreen and offline queuing — valuable for field analysts doing due diligence, but not needed until you validate the core product.
Inngest / Temporal
When Needed: Production-grade workflow orchestration with durable execution and built-in retries.
When: Adopt when n8n hits scaling limits — concurrent execution bottlenecks, complex retry requirements, or programmatic workflow composition. The SA's workflow abstraction layer is designed for this migration. Trigger: when n8n execution failures exceed 2% or queue latency exceeds 30 seconds consistently.
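The migration trigger above is concrete enough to encode. The thresholds come directly from the text; the metrics shape is an assumption about what your monitoring would expose.

```typescript
interface EngineHealth {
  failureRate: number;     // fraction of n8n executions failing, e.g. 0.02 = 2%
  queueLatencySec: number; // sustained queue latency in seconds
}

// Returns true when either stated threshold is breached: failure rate
// above 2% or queue latency consistently above 30 seconds.
function shouldConsiderMigration(h: EngineHealth): boolean {
  return h.failureRate > 0.02 || h.queueLatencySec > 30;
}
```

Wiring a predicate like this into the monitoring setup (Phase 5) turns the migration decision into an alert rather than a judgment call made under pressure.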
Your Team
One senior engineer, full ownership

Abhishek Sharma
Lead Engineer
16+ years in software engineering
Go, Node.js/NestJS, React, Next.js, PostgreSQL
Architecture, conversation engine, orchestration layer, state model design
All phases, personally leading every deliverable
Additional frontend and QA resources are available for parallel work if the timeline needs compression. The core architecture and conversation engine will always be Abhishek-led.
Working Cadence
Agreement
What's included, what's not, and what happens next
Included
- Full source code — all repositories, you own everything
- SA conversation service (backend)
- Chat UI (frontend, built as a new Next.js application)
- Three-layer state model (2 new Supabase tables + integration with existing prompt_outputs)
- Orchestration layer with n8n webhook integration
- Retrieval layer with semantic search (pgvector)
- Deliverable generation flows (IC memo, discussion guide, diligence scoping)
- System prompt design and iterative tuning
- UI/UX design system (colors, typography, spacing, component library, wireframes)
- Automated test suite (unit tests for all 57 L2s, integration tests, AI quality evaluation)
- Monitoring setup (Sentry error tracking, uptime monitoring, alerting rules)
- Architecture documentation
- Weekly progress updates and code access from day 1
Not Included
- Modifications to existing n8n workflows or analytical module prompts
- Full migration to Inngest/Temporal (abstraction layer only — migration-ready)
- LLM API costs (Anthropic, OpenAI usage fees)
- Supabase hosting costs
- Content or template design for deliverables
- Mobile application
- Post-launch feature development (available via retainer engagement)
- Redis caching layer (recommended as fast-follow if concurrent sessions exceed ~50)
- Push notifications and email alerts (in-chat notifications are included)
- PWA (progressive web app) capabilities
- n8n to custom orchestration migration (abstraction layer included, full migration is a separate engagement)
Next Steps
Review this proposal and the architecture approach
Accept the 40-hour free POC — no commitment beyond that
Share Supabase read access and n8n webhook documentation
We start within 48 hours of acceptance
POC deliverable in ~1 week: architecture document + working conversation prototype
Review together, decide on full engagement
Questions?
Reach out at abhishek@fordelstudios.com — we'll walk through anything that needs clarification.
Document Version 1.0 — April 2026 — Confidential
Fordel Studios Development Pvt. Ltd.
