Backend Integration Case Study

Record Sync Service

A webhook-driven integration service for canonical record sync, deterministic ID resolution, and diagnostics that make mismatches explainable.

This project turned a messy, repetitive reconciliation workflow into a reliable backend surface with clear sync rules. The goal was not just to move data faster, but to make operators trust what had synced, what had conflicted, and what needed intervention.

Reliability gain

Deterministic sync flow

The public rebuild emphasizes a sync path that is explainable from webhook intake through canonical fetch, identity matching, and diagnostics.

Integration posture

Multi-system reconciliation

The service is framed around keeping overlapping record sources aligned without hidden writes.

Failure mode

Legacy constraints

Rate limits and batch-only windows shaped the architecture in practice.

Operator goal

Explain every mismatch

Diagnostics had to be useful to support and operations teams, not only backend engineers.

01

Problem

Customer records lived across a dozen systems with overlapping identifiers, partial updates, and inconsistent source-of-truth behavior. Manual reconciliation consumed hours every week and still left room for duplicate creation or stale support context.

02

Constraints

The integration surface was shaped by systems that were already in production, not by ideal APIs. Some sources were event-friendly, others required delayed canonical fetches, and several imposed rate limits or narrow batch windows for safe writes.

  • Legacy APIs with inconsistent update semantics
  • Batch windows for certain write operations
  • Noisy source payloads that still needed deterministic outcomes
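The rate-limit and batch-window constraints above can be sketched as a small write guard. This is a minimal illustration, not the service's actual API; the names (`SourcePolicy`, `can_write_now`) and the policy shape are assumptions.

```python
from dataclasses import dataclass
from datetime import time
from typing import Optional, Tuple

@dataclass
class SourcePolicy:
    """Per-source write limits imposed by a legacy API (illustrative shape)."""
    max_writes_per_minute: int
    batch_window: Optional[Tuple[time, time]]  # (start, end) when batch writes are safe

def can_write_now(policy: SourcePolicy, now: time, writes_this_minute: int) -> bool:
    """Allow a write only if it fits both the rate budget and the batch window."""
    if writes_this_minute >= policy.max_writes_per_minute:
        return False
    if policy.batch_window is not None:
        start, end = policy.batch_window
        if not (start <= now <= end):
            return False
    return True
```

A guard like this keeps the window and rate rules in one place, so every write path answers the same question the same way.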

03

Approach

The service separated ingestion from reconciliation. Webhooks triggered the sync flow, but every important write still passed through canonical record fetches, deterministic ID resolution, and explicit duplicate-match guards before touching the downstream support surface.

  • Webhook ingestion for freshness without trusting payloads blindly
  • Canonical fetch before write to reduce drift
  • Deterministic ID resolution to avoid accidental duplicate creation
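The control flow described above can be sketched as a single handler. The stand-in callables (`fetch_canonical`, `find_existing`, `create_downstream`) are hypothetical placeholders for the real clients, not the production code.

```python
def handle_webhook(event: dict, fetch_canonical, find_existing, create_downstream) -> dict:
    """Webhook payloads trigger the sync, but the write is driven by a
    fresh canonical fetch and a deterministic identity key."""
    canonical = fetch_canonical(event["recordId"])     # canonical fetch before write
    identity = str(canonical["lookup_value"]).strip()  # deterministic ID resolution
    existing = find_existing(identity)                 # duplicate-match guard
    if existing is not None:
        return {"action": "update", "target": existing, "identity": identity}
    return {"action": "create", "target": create_downstream(canonical), "identity": identity}
```

The key property is that the webhook payload never reaches the downstream surface directly; only the canonical record does.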

04

Routing Architecture

The service used a sync-rules configuration file to define per-app routing without code changes. Each rule specified which Kintone app to watch, which field to use for identity lookup, and how Kintone fields mapped to downstream fields, including an optional write-back to flag synced records.

  • sync-rules.json maps appId → lookup field → field mapping → downstream update spec
  • Multiple Kintone apps share one webhook handler; routing is config-driven, not branched code
  • Write-back field updates the Kintone record with the downstream ID after a successful sync
  • Fallback lookup handles identifier format variants (e.g. WO-1234 vs 1234) without custom code per app
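To make the routing concrete, here is one plausible shape for a sync rule and the config-driven dispatch it enables. The rule keys (`lookupField`, `fieldMap`, `writeBackField`), the app ID, and the field names are all illustrative assumptions, not the actual sync-rules.json schema.

```python
import json

# Hypothetical sync-rules.json content: one rule per Kintone app ID.
SYNC_RULES = json.loads("""
{
  "123": {
    "lookupField": "work_order_no",
    "fieldMap": {"customer_name": "name", "status_field": "status"},
    "writeBackField": "downstream_id"
  }
}
""")

def candidate_keys(value: str) -> list:
    """Fallback lookup: try the raw identifier, then format variants
    (e.g. 'WO-1234' also matches a bare '1234')."""
    raw = value.strip()
    keys = [raw]
    if "-" in raw:
        keys.append(raw.split("-", 1)[1])
    return keys

def route(app_id: str, record: dict) -> dict:
    """One webhook handler serves every app; the rule, not branched code,
    decides the identity field, the field mapping, and the write-back."""
    rule = SYNC_RULES[app_id]
    return {
        "lookup_candidates": candidate_keys(record[rule["lookupField"]]),
        "payload": {dst: record[src] for src, dst in rule["fieldMap"].items()},
        "write_back": rule["writeBackField"],
    }
```

Adding a new app becomes a config change: a new entry in the rules file, with no new handler code.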

05

Diagnostics

A key product choice was to make the integration debuggable by design. Operators needed to understand what record was seen, how it was matched, why a write was skipped, and what to do next without reading raw infrastructure logs.

  • Diagnostics endpoint for sync history and mismatch reasons
  • Conflict states that separated retryable issues from data-quality issues
  • Audit-friendly traces for duplicate guard decisions
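The retryable-versus-data-quality split above can be sketched as a small classifier. The state labels and mismatch reasons here are illustrative assumptions, not the service's actual diagnostics vocabulary.

```python
from enum import Enum
from typing import Optional

class ConflictState(Enum):
    CLEAN = "clean"
    RETRYABLE = "retryable"        # transient: retry will likely succeed
    DATA_QUALITY = "data_quality"  # needs a human: the source data is the problem

# Illustrative reason buckets an operator can act on.
RETRYABLE_REASONS = {"rate_limited", "timeout", "batch_window_closed"}

def classify(reason: Optional[str]) -> ConflictState:
    """Map a mismatch reason to a conflict state that tells the operator
    whether to wait, retry, or fix the source record."""
    if reason is None:
        return ConflictState.CLEAN
    if reason in RETRYABLE_REASONS:
        return ConflictState.RETRYABLE
    return ConflictState.DATA_QUALITY
```

The point of the split is triage: retryable states can be automated away, while data-quality states route straight to the team that owns the source record.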

06

Outcome

The system became reliable because every sync decision was both deterministic and explainable. Accuracy stayed high, reconciliation work dropped sharply, and support teams got fresher record context without engineering acting as the interpreter every time something drifted.
