Raffi Windarto
Osaka / Remote
Systems, workflows, and trusted operator tools
Product-minded Software Engineer — Osaka / Remote
I build internal products, operator surfaces, and AI-assisted workflows that stay calm when the work gets messy.
Over the last 3+ years, I've focused on internal tools, integration layers, and operations software where clarity matters as much as raw capability.
3+ years
shipping internal tools and workflow software
Ops-first
interfaces for teams working under pressure
Verification-led
automation that stays explainable and safe to trust

Raffi
Engineer building calm, practical systems for messy real work.
Fast Read
Good fit if you need someone who can turn messy operational work into software people actually trust.
Quick proof
Nine case studies are live, six with interactive demos — you can go from headline to actual product surface in one click. The Kakeibo and Record Sync demos have the most moving parts.
Right now
Building interfaces where AI does the heavy lifting but operators stay in control — visible state, safe retries, clean handoffs.
Actively exploring
Webhook-driven sync, canonical record fetching, deterministic ID resolution — the backend decisions that keep systems honest.
Always cared about
Interfaces that make the next right action obvious — even when someone is tired, rushed, or inheriting someone else's context.
Systems that operators trust when it matters most.
A queue-first operations console for monitoring work, choosing runtimes, and keeping handoffs readable when AI tasks start to sprawl.
A webhook-driven sync service that fetches canonical records, resolves identity safely, and makes mismatches explainable instead of mysterious.
A self-service analytics surface for operators who needed fast answers, stable filters, and views that made sense without a BI manual.
How I think about building systems that last.
Operators need confidence in their tools. Every feature ships with clear feedback, predictable behavior, and graceful degradation.
Large changes create large risks. I prefer incremental delivery that maintains stability while moving toward better architecture.
The person using the system at 2am during an incident shouldn't have to guess what a button does or what state they're in.
A task isn't done until it's verified. Build in checkpoints, confirmations, and audit trails so nothing slips through.
Where I focus my energy.
Admin panels, operator interfaces, and back-office systems designed for power users who need efficiency.
End-to-end automation of manual processes with proper error handling, notifications, and recovery.
Connecting disparate systems through APIs, event streams, and data pipelines with robust sync guarantees.
Information-dense interfaces that surface what matters without overwhelming the operator. Built for decision-making.
Migrations, refactors, and performance improvements that maintain service while improving architecture.
The thinking behind the work.
The operations team couldn't see enough across AI task execution to notice trouble early or hand work off cleanly.
Had to bridge multiple runtimes without dumping more raw system noise onto the people using it.
Normalized task state and routing context into one inbox, then treated trust signals and handoff continuity as product features.
Created a calmer path from task state to next action, especially when incidents were still unfolding.
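As a sketch of what that normalization can look like: one shared state model, one adapter per runtime. The runtime labels and raw status strings below are invented for illustration; only the shape of the idea is from the project.

```typescript
// Hypothetical shared task model; field names are illustrative.
type TaskState = "queued" | "running" | "needs_review" | "failed" | "done";

interface InboxTask {
  id: string;
  runtime: "agent" | "batch" | "human"; // assumed runtime labels
  state: TaskState;
  summary: string;      // human-readable, never raw logs
  handoffNote?: string; // context that travels with the task
}

// Each runtime gets one adapter that maps its raw status vocabulary
// into the shared model, so the inbox never shows runtime-specific noise.
function normalizeState(rawStatus: string): TaskState {
  switch (rawStatus) {
    case "PENDING":
    case "SCHEDULED":
      return "queued";
    case "IN_PROGRESS":
      return "running";
    case "COMPLETED":
      return "done";
    case "ERROR":
    case "TIMEOUT":
      return "failed";
    default:
      // Unknown statuses surface for a human instead of disappearing.
      return "needs_review";
  }
}
```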
Overlapping records across many systems made reconciliation repetitive, slow, and easy to get subtly wrong.
Legacy APIs, rate limits, and narrow write windows mattered more than clean greenfield architecture ideas.
Separated ingestion from reconciliation, then used canonical fetches, deterministic identity rules, and readable diagnostics.
Turned sync behavior into something support and operations teams could reason about without backend archaeology.
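A minimal sketch of what deterministic identity rules mean in that sync work, with hypothetical field names: stable keys are tried in a fixed priority order, and anything ambiguous becomes a reported mismatch rather than a guess.

```typescript
// Hypothetical record shapes; real systems carry far more fields.
interface Incoming {
  externalId?: string;
  email?: string;
}

interface CanonicalRecord {
  id: string;
  externalId?: string;
  email?: string;
}

function resolveIdentity(
  incoming: Incoming,
  canonical: CanonicalRecord[],
): CanonicalRecord | null {
  // Rule 1: an exact external-ID match wins outright.
  if (incoming.externalId) {
    const hit = canonical.find((r) => r.externalId === incoming.externalId);
    if (hit) return hit;
  }
  // Rule 2: normalized e-mail, but only if exactly one record matches.
  if (incoming.email) {
    const email = incoming.email.trim().toLowerCase();
    const hits = canonical.filter((r) => r.email?.toLowerCase() === email);
    if (hits.length === 1) return hits[0];
  }
  // No rule fired: surface a diagnosable mismatch, never a fuzzy merge.
  return null;
}
```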
Operators needed better answers from live metrics, but every new view still required an engineering ticket and a wait.
The dashboard had to reconcile different schemas while staying fast and legible during daily operational use.
Focused on searchable views, guardrailed filtering, and labels that favored operator language over analytics jargon.
Moved routine reporting closer to the people doing the work instead of routing every question back to engineering.
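Guardrailed filtering reduces to a small idea, sketched here with invented field names: operators compose filters freely, but only from an allowlisted set of fields and operators, so no saved view can drift into a query the dashboard can't serve.

```typescript
// Hypothetical allowlist: which fields can be filtered, and how.
const FILTERABLE = {
  status: ["eq", "neq"],
  region: ["eq"],
  createdAt: ["gte", "lte"],
} as const;

type FilterableField = keyof typeof FILTERABLE;

interface Filter {
  field: FilterableField;
  op: string;
  value: string;
}

// Reject anything outside the allowlist before it reaches the query layer.
function isAllowed(f: Filter): boolean {
  const ops: readonly string[] = FILTERABLE[f.field];
  return ops.includes(f.op);
}
```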
Non-technical users couldn't create or publish kintone-connected forms without filing a developer ticket for every change.
kintone API tokens had to stay server-side, public form access couldn't require respondent login, and multi-tenant isolation had to hold at the database layer.
Magic link auth, token-based public URLs, server-side form rendering from kintone field schema, and RLS-enforced per-user isolation.
Form creation became fully self-service — no developer involvement after initial setup, and submissions flow directly into kintone.
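A sketch of the two auth surfaces under assumptions: the forms table, public_token column, and domain are invented, while the Supabase client calls are the real API. The per-user isolation itself is enforced by RLS policies in the database, which this snippet deliberately doesn't reimplement.

```typescript
import { createClient } from "@supabase/supabase-js";
import { randomUUID } from "node:crypto";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!,
);

// Builder sign-in: a magic link by e-mail, no password to manage.
await supabase.auth.signInWithOtp({ email: "builder@example.com" });

// Publishing mints an opaque token; respondents get a URL built from
// the token, so public access never requires a login or exposes ids.
async function publishForm(formId: string): Promise<string> {
  const token = randomUUID();
  await supabase.from("forms").update({ public_token: token }).eq("id", formId);
  return `https://forms.example.com/f/${token}`; // hypothetical domain
}
```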
Sharing kintone data externally meant handing over admin credentials or building custom views every time — there was no middle ground.
Four different access modes had to work without ever exposing kintone credentials to the client, regardless of how a viewer was configured.
Server-side kintone proxy for all access modes, schema cache in Supabase, per-request access logs, and viewer status independent of access mode.
Teams can now publish specific kintone views externally with the right access model — public, password-gated, or login-required — without credential exposure.
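The proxy pattern is simple to state in code. This sketch collapses the project's four access modes into two checks and assumes the auth decisions are made upstream; the kintone records endpoint and token header are the real REST API, everything else is illustrative.

```typescript
type AccessMode = "public" | "password" | "login";

interface ViewerConfig {
  kintoneApp: number;
  accessMode: AccessMode;
}

// The kintone token lives only in the server environment; the browser
// only ever sees this handler's JSON response.
async function serveViewer(
  viewer: ViewerConfig,
  auth: { passwordOk: boolean; loggedIn: boolean },
): Promise<Response> {
  if (viewer.accessMode === "password" && !auth.passwordOk) {
    return new Response("Forbidden", { status: 403 });
  }
  if (viewer.accessMode === "login" && !auth.loggedIn) {
    return new Response("Unauthorized", { status: 401 });
  }
  const res = await fetch(
    `${process.env.KINTONE_BASE_URL}/k/v1/records.json?app=${viewer.kintoneApp}`,
    { headers: { "X-Cybozu-API-Token": process.env.KINTONE_API_TOKEN! } },
  );
  return new Response(await res.text(), {
    headers: { "Content-Type": "application/json" },
  });
}
```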
Existing budgeting apps were too complex (bank connections, mandatory sign-up) or too simple — no multi-currency support, no fast-entry flow, no real spending reflection.
Had to work offline from day one, support native and web from one codebase, handle multiple currencies without becoming a bank app, and require zero sign-up before tracking the first transaction.
Guest mode with local Repository abstraction (SQLite / IndexedDB), per-account currencies with live rate conversion, Kakeibo groups for category discipline, OCR receipt scanning behind a credit system, and monthly reflection journaling.
A cross-platform, multi-currency budgeting app with OCR scanning, 5 account types, CSV import/export, and a daily habit loop that works offline, in-browser, or synced — without the sign-up friction that kills adoption.
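The Repository abstraction is the piece that makes guest mode cheap, sketched here under assumptions: the interface and field names are simplified, and an in-memory class stands in for the real SQLite and IndexedDB implementations.

```typescript
// Simplified transaction shape; the real model carries more fields.
interface Tx {
  id: string;
  amount: number;
  currency: string; // per-account currency, converted at display time
  date: string;     // ISO date, yyyy-mm-dd
}

// The app depends only on this interface; platform wiring decides
// whether SQLite (native) or IndexedDB (web) backs it.
interface TransactionRepository {
  add(tx: Tx): Promise<void>;
  listByMonth(month: string): Promise<Tx[]>;
}

class InMemoryTransactionRepository implements TransactionRepository {
  private rows: Tx[] = [];

  async add(tx: Tx): Promise<void> {
    this.rows.push(tx);
  }

  async listByMonth(month: string): Promise<Tx[]> {
    return this.rows.filter((t) => t.date.startsWith(month));
  }
}

// Guest mode is just "construct a local repository and go": nothing
// stands between the user and their first entry.
const repo: TransactionRepository = new InMemoryTransactionRepository();
```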
No existing app handles the JP↔ID learning pair well — generic tools miss real-life Japanese scenarios and don't account for the structural differences Indonesian speakers face.
Had to work offline from day one, support bilingual direction switching as a first-class feature, and keep content extensible without a re-deploy.
Seeded quiz engine for reproducible encounters, review queue for missed-question tracking, Supabase content catalog behind a feature flag with bundled JSON fallback, and RPG mechanics to sustain the daily habit loop.
Live bilingual learning app with N5 quest progression, adaptive review, and a content pipeline that accepts new quests as database writes — no code change required.
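The seeded quiz engine hinges on one property: the same seed always produces the same question order. A minimal sketch using mulberry32, a small public-domain PRNG, driving a Fisher-Yates shuffle; the function names are illustrative.

```typescript
// mulberry32: a tiny deterministic PRNG seeded with a 32-bit integer.
function mulberry32(seed: number): () => number {
  return () => {
    seed |= 0;
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Seeded Fisher-Yates: identical seeds replay identical encounters,
// which makes missed-question reviews and bug reports reproducible.
function drawQuestions<T>(pool: T[], count: number, seed: number): T[] {
  const rand = mulberry32(seed);
  const deck = [...pool];
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck.slice(0, count);
}
```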
Operations staff needed real-time LINE notifications when kintone records changed, and users needed a way to respond via LINE — without accessing kintone directly or exposing API tokens in the browser.
kintone customization runs in the browser, so LINE tokens had to stay server-side. GCP Functions cold start added latency, and Reply API tokens expire in 30 seconds — wrong API mode causes silent failures.
Hard boundary between Reply API (user-initiated postbacks only) and Push API (all status change notifications). kintone.proxy() for browser-side LINE calls. HMAC signature verification before any payload processing.
Real-time status notifications flowing to LINE with two-way postback handling writing back to kintone — messaging costs predictable, delivery reliable, no tokens in the browser.
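The signature check is the documented LINE scheme: HMAC-SHA256 of the raw request body with the channel secret, base64-encoded, compared against the X-Line-Signature header. A sketch in Node:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify before parsing: an unsigned or mis-signed payload never
// reaches business logic.
function verifyLineSignature(
  rawBody: string,
  signature: string,
  channelSecret: string,
): boolean {
  const expected = createHmac("sha256", channelSecret)
    .update(rawBody)
    .digest("base64");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```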
Time-sensitive records in kintone had no urgency signal in native views — every deadline looked the same. Staff also needed to pause records intentionally without breaking the status flow.
kintone's 500-record API limit required cursor pagination. The pause mechanism had to be additive — no changes to existing process management rules or status field logic.
Separate stay flag field for intentional holds, days-remaining computed server-side as the primary sort key, urgency tiers driving row color coding, cursor-based pagination preventing silent truncation.
Operations teams got a morning triage queue sorted by actual urgency. Pause mechanics kept KPI counts clean, and the dashboard replaced direct kintone access for daily deadline work.
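Cursor pagination over kintone's REST API looks roughly like this sketch; the cursor endpoints and the 500-record cap are the real API, while the environment variable names are placeholders.

```typescript
const KINTONE_BASE = process.env.KINTONE_BASE_URL!; // https://example.cybozu.com
const API_TOKEN = process.env.KINTONE_API_TOKEN!;

// Create a cursor, then drain it page by page; stopping after the
// first request is what causes silent truncation at 500 records.
async function fetchAllRecords(app: number, query: string) {
  const cursor = await fetch(`${KINTONE_BASE}/k/v1/records/cursor.json`, {
    method: "POST",
    headers: {
      "X-Cybozu-API-Token": API_TOKEN,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ app, query, size: 500 }),
  }).then((r) => r.json());

  const all: unknown[] = [];
  let next = true;
  while (next) {
    const page = await fetch(
      `${KINTONE_BASE}/k/v1/records/cursor.json?id=${cursor.id}`,
      { headers: { "X-Cybozu-API-Token": API_TOKEN } },
    ).then((r) => r.json());
    all.push(...page.records);
    next = page.next; // false once the cursor is exhausted
  }
  return all;
}
```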
Report templates needed to migrate from a vendor system with no official export API — the only data source was runtime proxy responses and browser traffic.
Input shapes were inconsistent across real-world samples, PDF assets weren't reliably obtainable, and the toolkit had to be reproducible offline by the whole team.
zod-validated input normalization at the boundary, a pure deterministic transformer with no side effects, and HAR-based extraction as the fallback path.
Proved the migration path end-to-end — vendor API responses in, normalized internal JSON out — without reverse engineering or a live vendor connection.
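The boundary discipline is easy to show. This sketch uses zod's real API, but the vendor field names and internal shape are invented: validation and normalization happen once at the edge, and the transformer stays pure so it can be tested against captured samples offline.

```typescript
import { z } from "zod";

// Hypothetical vendor shape: ids arrive as strings or numbers, titles
// are sometimes missing, so the schema absorbs that variance here.
const VendorTemplate = z.object({
  id: z.union([z.string(), z.number()]).transform(String),
  title: z.string().default(""),
  fields: z.array(z.object({ key: z.string(), label: z.string() })),
});

// Pure transformer: same input, same output, no I/O and no side effects.
function toInternalJson(raw: unknown) {
  const t = VendorTemplate.parse(raw);
  return {
    templateId: t.id,
    name: t.title,
    fields: t.fields.map((f) => ({ id: f.key, label: f.label })),
  };
}
```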
I like software that helps someone make the next good decision.
Most of my work has lived in the unglamorous middle of a business: internal tools, operator workflows, integration layers, and the systems people depend on when something has already gone a little sideways.
That kind of software needs a different quality bar. It has to stay legible when someone is tired, rushed, or inheriting context from another teammate. A useful interface is not the one with the most state. It is the one that makes the next step obvious.
So I tend to work across product and engineering at the same time: clarify the workflow, cut noise, make failure states explainable, and ship the verification steps that let people trust automation without hand-waving.
Open to product engineering roles and select builds
Best fit: internal tools, ops software, AI-assisted workflows, and reliability-heavy products.