When working with Shopify at low to moderate volume, webhooks appear deceptively simple. You register an endpoint, Shopify sends events, and your system reacts accordingly.
At higher volumes, or when multiple downstream systems depend on those events, a less obvious problem emerges: Shopify webhooks are reliable, but not unique.
This article documents an issue we encountered while processing high-volume Shopify order events and the architectural changes required to make the system robust.
The Problem We Encountered
We were ingesting orders/create and orders/paid webhooks into a workflow service responsible for:
- validating orders
- enriching them with external data
- routing them to fulfilment providers
Under normal load, everything behaved as expected.
Under burst traffic, however, particularly during pre-order launches, we began to see:
- duplicate fulfilment attempts
- repeated API calls to couriers
- occasional double-reservation of stock
Crucially, Shopify was not “misbehaving”. The issue lay in our assumptions.
Webhooks Are Delivered At Least Once
Shopify’s webhook delivery model is at-least-once, not exactly-once.
This means:
- The same webhook can be delivered more than once
- Retries can occur after network timeouts
- Events can arrive out of order
At a small scale, this rarely causes visible issues. At volume, it becomes unavoidable.
Relying on “we haven’t seen duplicates yet” is not a safe strategy.
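The consequence of at-least-once delivery is easy to demonstrate. A minimal sketch (handler and event names are hypothetical, not Shopify's API) of a naive receiver that fulfils every event it is handed:

```python
def handle_order_paid(event, fulfilled):
    # Naive handler: acts on every delivery, with no duplicate protection.
    fulfilled.append(event["order_id"])

# Simulated delivery log: the same event arrives twice after a network timeout.
deliveries = [
    {"order_id": 1001},
    {"order_id": 1001},  # redelivery of the same event
    {"order_id": 1002},
]

fulfilled = []
for event in deliveries:
    handle_order_paid(event, fulfilled)

print(fulfilled)  # [1001, 1001, 1002] -- order 1001 fulfilled twice
```

Under at-least-once semantics, this double fulfilment is not a bug in the sender; it is the receiver's responsibility to tolerate it.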
Why Naïve Deduplication Fails
Our initial instinct was to deduplicate based on:
- webhook ID
- payload hash
- event timestamp
Each approach had weaknesses:
- webhook IDs change across retries
- payloads can differ slightly (e.g. metadata updates)
- timestamps are not guaranteed to be unique
The result was a system that usually worked, but failed under exactly the conditions where reliability mattered most.
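The payload-hash weakness in particular is worth illustrating. A sketch (field names hypothetical) of why hashing the full payload misses retries in which Shopify has since updated a field:

```python
import hashlib
import json

def payload_hash(payload):
    # Hash the canonicalised payload; any field change produces a new hash.
    canonical = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

original = {"id": 1001, "total": "49.99", "note": None}
# The retry carries the same order, but a metadata field changed in between.
retry = {"id": 1001, "total": "49.99", "note": "gift wrap"}

seen = {payload_hash(original)}
is_duplicate = payload_hash(retry) in seen
print(is_duplicate)  # False -- same order, but the hash no longer matches
```

The dedup check answers "have I seen these exact bytes?", which is a different question from "have I already acted on this order?".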
Idempotency as the Primary Design Constraint
The solution was not better retry logic or more aggressive filtering. It was idempotency.
Instead of asking:
“Have we seen this webhook before?”
We reframed the problem as:
“Can this operation safely run more than once?”
This led to a fundamental change in how we processed Shopify events.
The Practical Implementation
Rather than treating webhooks as instructions, we treated them as signals.
The workflow became:
- Receive webhook
- Persist a minimal, canonical representation of the event
- Transition the related order or line items through explicit states
- Allow repeated webhook processing without side effects
Key principles:
- Every state transition was atomic
- External API calls were guarded by state checks
- Operations became safe to re-run
A duplicated webhook no longer caused duplication — it became a no-op.
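The principles above can be sketched as a small state machine. Everything here is illustrative (the state names, the in-memory store, and the lock standing in for a database compare-and-set are assumptions, not our production code), but it shows the shape: the external call only fires when the guarded transition succeeds, so a redelivered webhook falls through as a no-op.

```python
from threading import Lock

# Explicit, allowed state transitions for an order.
VALID_TRANSITIONS = {("paid", "fulfilling"), ("fulfilling", "fulfilled")}

class OrderStore:
    def __init__(self):
        self._states = {}
        self._lock = Lock()  # stands in for a DB row lock / compare-and-set

    def transition(self, order_id, expected, target):
        # Atomic check-and-set: succeed only if the order is in the
        # expected state and the transition is allowed.
        with self._lock:
            current = self._states.get(order_id, "paid")
            if current != expected or (expected, target) not in VALID_TRANSITIONS:
                return False  # already past this state: safe no-op
            self._states[order_id] = target
            return True

def on_order_paid(store, order_id, courier_calls):
    # The external API call is guarded by the state check, so only the
    # first successful transition triggers a courier booking.
    if store.transition(order_id, "paid", "fulfilling"):
        courier_calls.append(order_id)  # external side effect

store = OrderStore()
calls = []
on_order_paid(store, 1001, calls)
on_order_paid(store, 1001, calls)  # duplicated webhook: no second booking
print(calls)  # [1001]
```

The operation is now safe to re-run: running the handler twice produces exactly the same outcome as running it once.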
Why This Matters for Fulfilment and Finance
The cost of getting this wrong is not theoretical.
In our case, failures manifested as:
- Duplicated courier bookings
- Inconsistent stock availability
- Reconciliation mismatches downstream
Once idempotency was enforced, these issues disappeared — not because Shopify changed, but because the system stopped assuming ideal conditions.
Lessons Learned
A few takeaways that may help others working at a similar scale:
- Treat Shopify webhooks as unreliable messengers, not commands
- Design workflows so repeated execution is safe
- Avoid coupling webhook delivery directly to irreversible actions
- Assume retries will happen, because they will
These considerations are rarely visible in simple Shopify builds, but become critical as volume and operational complexity increase.
Closing Thought
Most Shopify problems are not “Shopify problems”. They are distributed systems problems that only become visible once real-world constraints apply.
Webhooks are powerful, but only if your architecture honours the guarantees they actually provide, not the ones you assume.