Migration Tooling Case Study

A TypeScript migration toolkit for extracting, validating, and normalizing report template configurations from a proprietary vendor format into a stable internal schema.

This project addressed a migration path with no official export API. The vendor's template configuration was only accessible via runtime proxy responses — so the project built a reliable extraction pipeline from HAR captures and live proxy calls, with zod-validated normalization at the boundary and a web-based studio for migration review.

Core result

Extraction without an export API

The toolkit proved a reliable end-to-end migration path for report templates from a proprietary format — using HAR-based extraction and a deterministic normalizer — without reverse engineering or decryption.

Validation layer

zod schemas

Every input shape is validated before transformation. Invalid payloads fail loudly and explicitly, not silently.

Extraction fallback

HAR + proxy

When a clean list endpoint was unavailable, browser HAR exports provided a reproducible extraction fallback.
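The HAR format is a standard JSON structure (`log.entries[]` with request/response pairs), so extraction reduces to filtering entries and parsing response bodies. A sketch, assuming a hypothetical endpoint substring as the filter:

```typescript
// Minimal HAR extraction sketch. The endpointMatch filter and payload
// handling are illustrative assumptions, not the toolkit's actual code.
interface HarEntry {
  request: { url: string; method: string };
  response: { status: number; content: { text?: string; mimeType?: string } };
}

interface Har {
  log: { entries: HarEntry[] };
}

// Pull the JSON bodies of successful responses whose URL matches the
// target endpoint. Entries without a captured body are skipped.
export function extractTemplatePayloads(har: Har, endpointMatch: string): unknown[] {
  return har.log.entries
    .filter(
      (e) =>
        e.request.url.includes(endpointMatch) &&
        e.response.status === 200 &&
        e.response.content.text !== undefined
    )
    .map((e) => JSON.parse(e.response.content.text!));
}
```

Because any team member can export a HAR file from browser devtools, this fallback keeps extraction reproducible without the original author's session.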

Transform design

Deterministic

Same input always produces the same output. The transformer is pure — no filesystem or network dependencies.

01

Problem

Report templates from the existing vendor system needed to migrate to an internal format, but the vendor provided no official export API. Template configuration — field placements, layout coordinates, style options — was only accessible by making authenticated runtime API calls or capturing the responses from browser traffic. Manual migration was not scalable.

02

Constraints

The extraction approach had to work within strict boundaries. The vendor's input shape was inconsistent across real-world samples — service_files could arrive as a single object or as an array, and items could be page-keyed or already flat. Any extraction approach also had to be reproducible by the whole team, not just the original author.

  • No official export endpoint — all template data came from runtime proxy responses
  • Input shape inconsistencies required defensive normalization before any mapping could run
  • PDF template asset binaries were not reliably obtainable from the same source
  • The toolkit had to be runnable offline, with no dependency on a live vendor connection

03

Approach

The toolkit separated extraction from transformation. HAR captures or live proxy responses provided raw input; a zod schema layer made input inconsistencies explicit and catchable before normalization ran. The transformer itself is a pure function — same input, same output, no side effects — which made it easy to test against real-world samples before committing to the migration.

  • Input normalization handles both array and single-object service_files shapes transparently
  • Page-aware item flattening supports future multi-page template scope without a rewrite
  • Transformer is isolated from filesystem and network — safe to run in batch or test without a vendor connection
  • CLI scripts for sample runs, folder batch processing, and HAR asset extraction

04

Schema Design

Defining the internal schema before writing the transformer was the most important upfront decision. It created a stable target that the normalizer could map toward — and made the vendor's inconsistencies a transformation concern rather than a data integrity problem.

  • reportName and reportCode as stable human-readable and machine-readable identifiers
  • Placement coordinates normalized to numeric types with coercion guards for string inputs
  • Style codes mapped through an alias table — textAlign codes 1/2/3 become left/center/right
  • pageKey reserved in the schema for future multi-page support without breaking existing output
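Two of these decisions can be sketched directly. The 1/2/3 → left/center/right mapping comes from the case study; the function names and error messages are illustrative assumptions:

```typescript
// Style-code alias table: numeric vendor codes map to readable values.
const TEXT_ALIGN: Record<number, "left" | "center" | "right"> = {
  1: "left",
  2: "center",
  3: "right",
};

export function mapTextAlign(code: number): "left" | "center" | "right" {
  const mapped = TEXT_ALIGN[code];
  if (!mapped) throw new Error(`Unknown textAlign code: ${code}`);
  return mapped;
}

// Coercion guard: placement coordinates sometimes arrive as strings
// ("12.5"); coerce to number and fail loudly on anything non-numeric.
export function coerceCoordinate(value: number | string): number {
  const n = typeof value === "number" ? value : Number(value);
  if (Number.isNaN(n)) throw new Error(`Non-numeric coordinate: ${value}`);
  return n;
}
```

Throwing on unknown codes, rather than passing them through, keeps vendor inconsistencies a visible transformation concern instead of a silent data integrity problem.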

05

Outcome

The toolkit proved the migration path end-to-end: raw vendor API responses in, validated and normalized internal JSON out. The deterministic transformer and HAR fallback together covered both automated and manual extraction scenarios. The web studio gave the migration a review surface that did not require engineers to read raw JSON for every verification step.

Continue Exploring

Kakeibo Budget App is also live.

Each case study covers a different part of the stack — see how the same engineering principles show up across different problem types.

Read Kakeibo Budget App