The problem, before.
A marketplace is, in truth, two products wearing one coat. There is a buyer product — a storefront, a cart, a checkout — and a seller product — inventory, payouts, fulfilment. They share only a ledger and a vocabulary.
Before
The founders needed both products at once, with a mobile-first bias for buyers. Off-the-shelf marketplace SaaS forced its own flows on them, hid the underlying data, and made payouts opaque. A bespoke platform was the only way — but it had to ship in months, not quarters, with a lean team.
After
A production marketplace with a buyer Android app, a web storefront, a seller console and an admin panel. One architecture, one deployment pipeline, one CTO. Every invariant around inventory, orders and payouts is enforced at the service layer where it matters.
ZammerNow was briefed as both products at once. The architecture had to accommodate growth without being a monument to premature scaling.
“Uddit has a very natural way of communicating complex technical ideas. He sat through our internal workflows patiently and pointed out inefficiencies we had been ignoring for months.”
Two products, one spine.
The split at the gateway is the central design choice. Buyer traffic and seller traffic share the same code paths for reads they have in common (catalog, orders) but diverge in their authorisation model, their rate limits, and their cache strategies. A buyer viewing a product and a seller editing it are two verbs on the same noun.
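A minimal sketch of that split, assuming an Express-style gateway with express-rate-limit; the routes, role header and limits here are illustrative, not the production configuration:

```ts
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();
app.use(express.json());

// Illustrative role gate; the real authorisation model is richer than a header check.
const requireRole =
  (role: "buyer" | "seller") =>
  (req: express.Request, res: express.Response, next: express.NextFunction) => {
    if (req.header("x-role") !== role) {
      res.status(403).json({ error: "forbidden" });
      return;
    }
    next();
  };

// Same noun, two verbs: reads are generous, writes are tight.
const readLimiter = rateLimit({ windowMs: 60_000, max: 300 });
const writeLimiter = rateLimit({ windowMs: 60_000, max: 30 });

// Buyers and sellers share this read path.
app.get("/products/:id", readLimiter, (req, res) => {
  res.set("Cache-Control", "public, max-age=300"); // cache-friendly read
  res.json({ id: req.params.id });                 // catalog lookup elided
});

// Writes are seller-only, tightly limited and never cached.
app.put("/products/:id", requireRole("seller"), writeLimiter, (req, res) => {
  res.set("Cache-Control", "no-store");
  res.json({ id: req.params.id, updated: true });  // inventory write elided
});

app.listen(3000);
```

The point is structural: the two audiences diverge in the middleware chain, not in forked handlers.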
Building a marketplace or two-sided platform?
Marketplaces fail in predictable ways — inventory races, payout drift, catalog consistency. If you're mapping these, a 30-minute scoping call usually saves weeks of wrong turns.
Order orchestration — the sequence.
An order on a multi-vendor marketplace is deceptively layered. A single cart may split across multiple sellers, each with their own fulfilment, each with their own payout. The order service composes this into one transaction for the buyer and many transactions for the books.
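As a sketch of that composition, with illustrative types and prices in paise (the real service also handles reservations, idempotency and partial failure):

```ts
// Illustrative types only; the real order service is richer than this.
interface CartLine {
  sellerId: string;
  sku: string;
  qty: number;
  unitPrice: number; // minor units (paise)
}

interface SellerOrder {
  sellerId: string;
  lines: CartLine[];
  subtotal: number;
}

// Group one buyer cart into per-seller orders, each of which becomes
// its own fulfilment and, eventually, its own payout ledger entry.
function splitBySeller(cart: CartLine[]): SellerOrder[] {
  const groups = new Map<string, CartLine[]>();
  for (const line of cart) {
    groups.set(line.sellerId, [...(groups.get(line.sellerId) ?? []), line]);
  }
  return [...groups.entries()].map(([sellerId, lines]) => ({
    sellerId,
    lines,
    subtotal: lines.reduce((sum, l) => sum + l.qty * l.unitPrice, 0),
  }));
}

const sellerOrders = splitBySeller([
  { sellerId: "s1", sku: "tee-m", qty: 2, unitPrice: 49_900 },
  { sellerId: "s2", sku: "mug", qty: 1, unitPrice: 29_900 },
]);

// One charge for the buyer; one booking per seller for the books.
const buyerTotal = sellerOrders.reduce((sum, o) => sum + o.subtotal, 0);
console.log(buyerTotal, sellerOrders.length); // 129700, 2
```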
DevOps — operated from the CLI.
Infrastructure is provisioned and operated almost entirely from aws-cli scripts — repeatable, versioned, auditable. The deployment model is simple: containerised services, pushed to ECR, run on EC2 behind a CloudFront distribution, with Route 53 at the edge. CloudWatch collects logs and metrics; CloudTrail keeps the paper trail. Everything a new engineer needs to understand the stack is a README away.
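A sketch of one runbook step, the container build-and-push, wrapped in TypeScript here to match the other examples; the production scripts are plain aws-cli, and the region, account, repository and distribution id below are placeholders:

```ts
import { execSync } from "node:child_process";

// Placeholder values; substitute your own account, region and names.
const REGION = "ap-south-1";
const ACCOUNT = "123456789012";
const REPO = "zammernow/order-service";
const REGISTRY = `${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com`;

const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

// Authenticate Docker against ECR, then build, tag and push the image.
run(
  `aws ecr get-login-password --region ${REGION} | ` +
    `docker login --username AWS --password-stdin ${REGISTRY}`
);
run(`docker build -t ${REPO}:latest .`);
run(`docker tag ${REPO}:latest ${REGISTRY}/${REPO}:latest`);
run(`docker push ${REGISTRY}/${REPO}:latest`);

// Invalidate CloudFront so the edge stops serving the previous build.
run(`aws cloudfront create-invalidation --distribution-id E1PLACEHOLDER --paths "/*"`);
```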
Theory of strong design.
A marketplace collapses under its own weight unless the invariants are explicit. We wrote ours down on the first page of the handbook.
The invariants were few: no seller can affect another seller’s inventory; no order transitions without a ledger entry; every payout is reconstructable from payments and returns. Every service is measured against those three lines.
The cost of violating them would be financial, reputational, and subtle. Invariants tell you where to put the tests, where to put the alarms, and where to refuse clever shortcuts.
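To make the second invariant concrete, here is one way to enforce it at the service layer, sketched against an illustrative transaction abstraction (none of these names come from the actual codebase):

```ts
type OrderStatus = "placed" | "paid" | "shipped" | "delivered";

// Illustrative transaction handle; stands in for whatever the database layer provides.
interface Tx {
  updateOrderStatus(orderId: string, next: OrderStatus): Promise<void>;
  insertLedgerEntry(entry: { orderId: string; event: OrderStatus; at: Date }): Promise<void>;
}

type WithTransaction = <T>(fn: (tx: Tx) => Promise<T>) => Promise<T>;

// The status change and its ledger entry commit together or not at all,
// so no order can transition without leaving a ledger entry behind.
async function transitionOrder(
  withTransaction: WithTransaction,
  orderId: string,
  next: OrderStatus,
): Promise<void> {
  await withTransaction(async (tx) => {
    await tx.updateOrderStatus(orderId, next);
    await tx.insertLedgerEntry({ orderId, event: next, at: new Date() });
  });
}
```

The shape also says where the tests go: assert that every status row has a matching ledger row, and alarm when the counts drift.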
How it was built.
Build timeline
- Architecture & design (Month 1)
Buyer/seller split, invariant definition, data model, AWS topology, CLI runbook.
- Core commerce services (Months 2–3)
Catalog, cart, checkout, order, inventory — wired end-to-end with Razorpay payments and seller payouts.
- Buyer Android app (Months 3–4)
React Native client, Play Store release flow, push notifications via FCM.
- Seller console + admin (Months 4–5)
Listing workflows, inventory editor, payout dashboard, moderation tools.
- Hardening + go-live (Months 5–6)
Load testing, invariant audits, CloudWatch dashboards, production cutover.
Outcome.
ZammerNow shipped as a live product with active users: buyers on Android, sellers on the web, and an admin panel keeping both sides in harmony. The architecture has absorbed new categories and new payment rails without a rewrite.
Have a marketplace, D2C or CTO-less startup?
I work as a fractional CTO or build-and-handover architect for early-stage teams. From zero to a live product with a real tech strategy — email or book a call.