Managing 200+ products without losing your mind
The ops systems, internal tools, and small habits that keep a large catalog calm — and keep customer support fast.
The complaint came in through support. A customer had paid for a product that wasn’t actually available. The listing said it was. The description promised something slightly different from what they received. It was the kind of mistake that’s embarrassing to explain, hard to defend, and, at the scale DigitaVision operates, completely predictable if you’re still trying to manage everything manually.
That ticket was the moment I stopped pretending the old approach was working.
The problem isn’t the products, it’s the drift
At 200+ products, nothing stays static. Prices change. Items go in and out of availability. Server-side product IDs, the ones that actually drive how the site operates, need to stay in sync with the catalog. Every gap between what the catalog says and what the site does is a future support ticket waiting to happen.
This is the thing that doesn’t get talked about in catalog management: it’s not a setup problem, it’s a maintenance problem. You can get everything right once. Keeping it right is a different job entirely.
The naive approach is to check everything manually. I tried. Somewhere around the 80-product mark, that stopped being realistic. By 200 products, reviewing the catalog every two or three days — which is the minimum cadence to catch issues before customers do — would have taken hours each time. It just doesn’t fit in a real schedule.
The check that replaced hours of manual review
The system I ended up building isn’t automated in the way you might expect. There’s no script running on a server. It’s a workflow I run myself, with Claude doing the comparison work that used to eat the most time.
Here’s what it looks like:
Step 1 — Pull the live data. The company site has an API endpoint that returns all product data as raw JSON: name, price, whether it’s recurring, product type, duration, everything. This is what the site is actually serving.
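If you scripted this step, it would only be a few lines. A minimal sketch, assuming a hypothetical endpoint URL; the real one is internal to the company site:

```python
# Step 1 sketch: fetch the catalog the site is actually serving.
# API_URL is a placeholder, not the real internal endpoint.
import json
import urllib.request

API_URL = "https://example.com/api/products"

def pull_live_catalog(url: str = API_URL) -> list[dict]:
    """Return the raw product list exactly as the site serves it."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```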
Step 2 — Compare against the Notion catalog. I maintain a Notion database that I keep updated manually — it’s my source of truth. It has the same fields as the API: price, stock status, product IDs, availability, type. I’ve built it up over time and update it as products shift with market trends.
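Notion databases export cleanly to CSV, so a scripted version of this step is just loading the export and indexing it by ID. The file name and column header below are hypothetical:

```python
# Step 2 sketch: load the Notion source-of-truth export, keyed by product ID.
# "catalog.csv" and the "product_id" column are made-up names for illustration.
import csv

def load_notion_catalog(path: str = "catalog.csv") -> dict[str, dict]:
    """Index the source-of-truth rows by product ID for fast lookup."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["product_id"]: row for row in csv.DictReader(f)}
```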
Step 3 — Cross-check the server files. Some products have IDs referenced in access files on the server. These files control how the site operates. I check those too — wrong IDs at that level cause functional problems, not just display ones.
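A rough sketch of that cross-check, assuming the access files are plain text and the IDs appear in a greppable pattern; both the glob and the regex are stand-ins for whatever the real server layout is:

```python
# Step 3 sketch: collect every product ID the server-side access files
# reference. The directory, extension, and ID pattern are hypothetical.
import re
from pathlib import Path

ID_PATTERN = re.compile(r"product_id\s*=\s*(\w+)")

def ids_in_server_files(root: str = "access/") -> set[str]:
    """Scan access files for referenced product IDs."""
    found: set[str] = set()
    for path in Path(root).glob("**/*.conf"):
        found.update(ID_PATTERN.findall(path.read_text(encoding="utf-8")))
    return found

# An ID the server references but the catalog doesn't know about is a
# functional bug, not a display bug:
# orphans = ids_in_server_files() - set(load_notion_catalog().keys())
```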
Step 4 — Feed it all to Claude. I give Claude the API data and the Notion data together and ask it to find the mismatches. It reads both, compares every field, and returns a list of products where something is off. For anything that needs fixing, it outputs the corrected format directly — I copy-paste the edits.
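For readers who would rather see the logic than delegate it, here is roughly the field-by-field diff Claude performs, written as code. It assumes the field names match across both sources, which in practice is exactly the thing Claude is more forgiving about:

```python
# Step 4 sketch: the field-by-field comparison, scripted. FIELDS lists
# assumed shared field names; adjust them to the real schema.
FIELDS = ("price", "stock_status", "availability", "product_type", "recurring")

def find_mismatches(live: list[dict], catalog: dict[str, dict]) -> list[str]:
    """Return one human-readable line per field where site and Notion disagree."""
    issues: list[str] = []
    for product in live:
        truth = catalog.get(str(product.get("product_id")))
        if truth is None:
            issues.append(f"{product.get('name')}: live on site, missing from Notion")
            continue
        for field in FIELDS:
            if str(product.get(field)) != str(truth.get(field)):
                issues.append(
                    f"{product.get('name')}: {field} is {product.get(field)!r} "
                    f"on site, {truth.get(field)!r} in Notion"
                )
    return issues

# Usage, chaining the earlier sketches:
# for line in find_mismatches(pull_live_catalog(), load_notion_catalog()):
#     print(line)
```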
What used to take hours now takes minutes. Not because I automated it — because I stopped reading 200 rows manually and started asking the right tool to do the comparison for me.
This runs every two to three days. A few times a month it catches something real before it turns into a complaint. That’s the gap the whole system exists to close.
On the AI tool stack: no dogma
I use Claude, ChatGPT, and Gemini. There’s no strict assignment for which tool does what — I use whichever one gives the sharpest output for the specific job in front of me. Claude handles the structured comparison work well. For writing, any of them can draft a product description; I edit whatever comes back.
The interchangeable approach took some adjustment. I kept wanting to pick a primary tool and commit to it. The reality is that these tools have different strengths on different days for different tasks, and treating them like different search engines — use the one that returns the best result — is more practical than building theology around any of them.
New product descriptions follow the same pattern: AI drafts a first version, I edit it down into something accurate and appropriately specific. The draft gets about 70% of the way there. The last 30% is the product knowledge that doesn’t exist anywhere in training data.
What keeps the catalog honest: the Notion source of truth
The Claude comparison is only as useful as the data you bring to it. If the Notion catalog is wrong, the audit is wrong too.
The Notion database is a mix of structured data (product fields: price, stock status, IDs, duration, type, recurring flag) and supporting docs for context I can’t put in a field. I update it as I learn things — when a product’s pricing tier changes, when something moves out of active stock, when a server-side ID gets updated.
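As a rough picture, one row carries something like this; the property names are illustrative, not the actual Notion columns:

```python
# Hypothetical shape of one catalog row, matching the fields listed above.
from dataclasses import dataclass

@dataclass
class ProductRecord:
    product_id: str
    name: str
    price: float
    stock_status: str    # e.g. "in_stock" / "out_of_stock"
    duration: str        # e.g. "30 days", "lifetime"
    product_type: str
    recurring: bool      # subscription vs. one-time purchase
    notes: str = ""      # context that doesn't fit a structured field
```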
This isn’t the interesting part of the system. It’s also the most important part. Everything downstream — the audit, the AI comparison, the copy-pasted corrections — depends on this database being current. Neglecting it means the checks return clean results on stale data.
What I’d skip if I started over
Spreadsheets past the first 50 products. Spreadsheets are fine early. They become actively dangerous as volume grows — too many cells touching the same data, no enforced structure, no history. The moment you have more products than you can scroll through in one sitting, it’s time for something with actual database properties.
Reactive-only maintenance. For the first stretch, my approach was fix-it-when-it-breaks. That works until you’re dealing with customer-facing failures, and then you’re always behind. The audit habit — scheduled, recurring, before anything goes wrong — changes the whole relationship with catalog quality.
Expecting one tool to handle everything. The instinct is to find the single right tool and use it exclusively. The catalog work spans too many different task types for that to be practical. Description writing, structured data comparison, market research, error flagging — different tools, used for what they’re each good at.
What the system still can’t fully solve
The hardest ongoing problem is products with unstable market positions. Some products are fine: stable pricing, reliable availability, minimal maintenance. Others are volatile: prices shift, availability flips, supplier reliability varies. For those, no audit cadence is fast enough to stay fully ahead of the churn.
The system catches drift. It doesn’t prevent volatility. That distinction matters because it shapes what I expect the tools to do. The goal isn’t zero catalog errors — at this scale, that’s not realistic. The goal is catching errors before customers do.
Every workflow, every audit, every cross-check is aimed at closing that gap. So far, it’s the only approach I’ve found that actually holds.
Mezbah Uddin
Product Manager at DigitaVision LTD, founder of Royal Subz. Writing about building, shipping, and growing SaaS products.