# Karanveer Singh Shaktawat — Full Stack Engineer & Infrastructure Architect

Building scalable systems, developer tools, and AI-powered applications.

Contact: karan@dharmicdev.in
Location: Kota, Rajasthan, India (IST, UTC+5:30)
GitHub: https://github.com/xczer
LinkedIn: https://linkedin.com/in/karanveersinghshaktawat
Site: https://yourportfolio.example

---

# Projects

## Prachyam Sangam

**Subtitle:** OTT Platform — 9 Platforms, 1 Monorepo
**Category:** fullstack
**Tech:** Nx, Next.js, React Native, Expo, Tauri, Bun, Elysia, TypeScript, PostgreSQL
**URL:** https://yourportfolio.example/projects/prachyam-sangam
**Live:** https://prachyam-sangam.dharmic.cloud

Ground-up rewrite of a broken 18-developer OTT codebase into a 21-package Nx TypeScript monorepo targeting 9 platforms: Next.js web, React Native iOS/Android, Apple TV, Android TV, Roku, Tizen, webOS, and Tauri desktop admin. Solo build.

## Overview

Prachyam Sangam is the ground-up rewrite of Prachyam Studios' OTT platform — a project I inherited as a broken, 18-developer codebase and rebuilt solo into a 21-package Nx TypeScript monorepo targeting 9 distinct platforms: Next.js web, React Native iOS and Android, Apple TV, Android TV, Roku, Tizen (Samsung), webOS (LG), and a Tauri desktop admin panel.

The previous platform (see Prachyam Legacy) had accumulated 1,669 commits of technical debt across incompatible stacks. Sangam is the clean-room successor — designed from the monorepo boundary outward, with every cross-platform contract defined at the package layer before a single UI component was written.

## The Challenge

The core problem with multi-platform OTT is that each target has fundamentally different rendering, input, and DRM models. A TV remote is not a touchscreen. Roku's SceneGraph XML is not React. Tizen and webOS each have their own app packaging and certification requirements. The naive approach — write one UI, bolt on platform shims — produces an unmaintainable mess at exactly the scale the previous platform had reached.

The constraint I set: shared business logic, typed contracts, zero copy-paste between platform packages.

## Architecture

The monorepo is structured in three layers:

**Core packages** (`@sangam/core`) define domain models, API client, auth, and player state as pure TypeScript with no DOM or RN dependencies. Every platform imports from here.

**Shared UI packages** split by rendering target — `@sangam/ui-web` (React + Tailwind), `@sangam/ui-native` (React Native + NativeWind), `@sangam/ui-tv` (React Native TV). Platform-specific components extend shared primitives; none duplicate logic.

**Platform apps** (`apps/web`, `apps/ios`, `apps/android`, `apps/apple-tv`, `apps/android-tv`, `apps/roku`, `apps/tizen`, `apps/webos`, `apps/admin`) are thin shells that wire platform entry points to the shared packages.

Nx handles task orchestration, build caching, and affected-project detection. A change to `@sangam/core` triggers rebuilds only for packages that depend on it — not the full workspace.
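To make the layering concrete, here is a hedged sketch of the contract; every name is hypothetical, not the actual `@sangam` source:

```ts
// @sangam/core: pure TypeScript, no DOM or React Native imports.
export interface PlaybackState {
  assetId: string;
  positionSeconds: number;
  status: "idle" | "buffering" | "playing" | "paused";
}

// Domain logic lives here exactly once; every platform shell reuses it.
export function seek(state: PlaybackState, toSeconds: number): PlaybackState {
  return { ...state, positionSeconds: Math.max(0, toSeconds) };
}

// @sangam/ui-web: a rendering-target package may import React and the core,
// never another platform's UI package.
// apps/web: a thin shell that wires Next.js entry points to core + ui-web.
```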
## Platform Notes

**Roku** required the biggest context switch: SceneGraph XML with BrightScript scripting, a completely different mental model from component trees. The player integration alone took two weeks of Roku device testing.

**Tizen and webOS** share a React-based renderer but have divergent packaging, signing, and store submission requirements. Both target smart TV input models (D-pad navigation, long-press, focus rings) that need custom focus management — React's synthetic event model doesn't map cleanly to TV remotes.

**Apple TV and Android TV** use the React Native TV fork with tvOS/leanback-specific navigation. Focus management is handled through a custom `useTVFocus` hook that wraps the platform's native APIs.

**Tauri admin** is the desktop management panel for studio staff — content ingestion, scheduling, analytics, and the DRM key management interface.
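A minimal sketch of the shape such a hook might take, assuming React Native TV's focus and blur events reach the component as ordinary props; the names are hypothetical, not the production `useTVFocus`:

```tsx
// Hypothetical sketch, not the actual @sangam implementation.
import { useCallback, useState } from "react";

interface TVFocusOptions {
  onSelect?: () => void; // remote "OK" press while focused
}

export function useTVFocus({ onSelect }: TVFocusOptions = {}) {
  const [focused, setFocused] = useState(false);

  // React Native TV surfaces D-pad focus as focus/blur events; the hook
  // normalises them into a single flag plus spreadable props.
  const onFocus = useCallback(() => setFocused(true), []);
  const onBlur = useCallback(() => setFocused(false), []);

  return { focused, tvProps: { onFocus, onBlur, onPress: onSelect } };
}
```

A component spreads `tvProps` onto a `Pressable` and styles its focus ring from the `focused` flag.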
## MCP Integration

10 custom Model Context Protocol servers (~280 tools) give Claude Code live read access to the Postgres schema, BullMQ job queues, MinIO buckets, Docker service state, and Prometheus metrics across the stack. This compressed the CLAUDE.md context by 69% and allowed AI-assisted debugging without manually pasting database state into prompts.

## Key Decisions

**Nx over Turborepo** — Nx's task graph and affected detection are more granular for a workspace this large. Turborepo's simpler model would have required more manual cache configuration at 21 packages.

**Elysia over Hono** — Elysia's end-to-end type safety (request → response → client SDK) via the treaty plugin eliminated an entire class of runtime errors between the API and platform consumers.

**Bun as runtime** — The monorepo toolchain runs entirely on Bun. Script execution and test runs are meaningfully faster than Node across 21 packages, which compounds over hundreds of daily builds.

## Prachyam Infrastructure

**Subtitle:** Self-Hosted Enterprise Stack
**Category:** infrastructure
**Tech:** Docker, Mailcow, NextCloud, Tailscale, Nginx, Caddy, dnsmasq, Linux
**URL:** https://yourportfolio.example/projects/prachyam-infra
**Live:** https://prachyam-infra.dharmic.cloud

Deployed and managed self-hosted Mailcow, NextCloud, and Tailscale mesh VPN — handling 18M emails across 12 domains, saving lakhs annually.

## Overview

When I joined Prachyam Studios, the company was spending heavily on commercial services — Google Workspace for email, LucidLink for file sharing, scattered tools for development. I proposed self-hosting everything on our own infrastructure and built the entire stack from scratch.

## Infrastructure Overview

```mermaid
flowchart LR
  Nginx --> Mailcow
  Mailcow --> Rspamd --> ClamAV
  Users --> Nginx --> NextCloud
  NextCloud --> LDAP
  MCP --> Qdrant
  Dev1 <--> Caddy
  Dev2 <--> Caddy
  Dev3 <--> Caddy
  VPS <-.->|Tailscale| Mesh
```

## The Challenge

A growing 22-person team needed reliable email across 12 domains, shared file storage and collaboration, a private development network, and an OTT platform targeting 12+ devices — all without the budget for enterprise SaaS pricing.

## Mail Server — Mailcow

Deployed a self-hosted Mailcow mail server on Docker that handled **18 million emails** across 12 domains. This wasn't just internal email — it powered marketing campaigns and business communication at scale.

Configuration went deep: SPF, DKIM, and DMARC records for every domain to ensure deliverability. Rspamd for anti-spam filtering. ClamAV for virus scanning. Automated backups to prevent data loss.

The cost comparison was stark: Google Workspace at $6/user/month across 12 domains adds up fast. Our self-hosted solution runs on a single VPS at ~₹800/month.

## File Sharing — NextCloud

Set up NextCloud as a self-hosted replacement for LucidLink. The 22-person team gets file sharing, real-time document collaboration, calendar management, and video meetings — all running on our own servers. LDAP authentication keeps user management centralized. Storage quotas prevent anyone from filling the disk. Automated backups run nightly.

LucidLink costs ~$25/user/month. For 22 users, that's $550/month (~₹5.5 lakh/year). Our NextCloud instance runs on the same VPS alongside Mailcow.

## Development Network — Tailscale + Caddy + dnsmasq

The company's machines were underpowered for the development workload. Instead of buying new hardware, I networked the existing machines together into a private mesh. Tailscale creates a mesh VPN where every machine can reach every other machine securely. dnsmasq handles custom DNS resolution so services are accessible by name. Caddy provides automatic HTTPS with valid certificates on the private network.

The result: developers distribute builds, tests, and services across multiple machines as if they were one system.

## AI Tooling — 23 MCP Servers

Built 23 custom Model Context Protocol servers for AI-assisted development. A Qdrant vector database indexes the entire codebase for semantic search — developers can ask natural language questions about the code and get accurate answers.

Each MCP server handles a specific capability: codebase Q&A, deployment automation, log analysis, documentation generation, and code review assistance. The tooling integrates with Claude, GPT, and other LLM providers.
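A minimal sketch of one such server, using the official TypeScript MCP SDK; the tool name and retrieval stub are hypothetical, not the actual Prachyam source:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "codebase-qa", version: "0.1.0" });

// One focused capability per server: here, natural-language codebase Q&A.
server.tool(
  "ask_codebase",
  { question: z.string().describe("Natural-language question about the code") },
  async ({ question }) => ({
    content: [{ type: "text", text: await searchIndexedCode(question) }],
  })
);

// Stand-in for the Qdrant-backed retrieval described above.
async function searchIndexedCode(question: string): Promise<string> {
  return `stub answer for: ${question}`;
}

// The MCP client (Claude, etc.) spawns this process and talks over stdio.
await server.connect(new StdioServerTransport());
```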
## Results

The infrastructure serves a 22-person team reliably with 18M emails processed, 12 domains managed, file sharing and collaboration running smoothly, and AI-assisted development tooling that made the entire team more productive. Total infrastructure cost: ~₹800/month VPS. Estimated commercial equivalent: lakhs annually.

## HeyBeautiful

**Subtitle:** Tell your crush without saying a word
**Category:** fullstack
**Tech:** Next.js, TypeScript, PostgreSQL, Drizzle ORM, Dragonfly, Stripe, GSAP, Framer Motion, Tailwind CSS, Turborepo, Bun
**URL:** https://yourportfolio.example/projects/heybeautiful
**Live:** https://heybeautiful.one

Built a full-stack SaaS platform where anyone can create a cinematic, scroll-driven personalised page for their crush and share a link — solo, from zero to live production in 41 days with 207/207 roadmap items completed.

## Overview

HeyBeautiful started with a personal problem: the gap between noticing someone and saying something real is broken. Phone numbers are cold, dating apps reduce people to thumbnails, and pickup lines die in ten seconds. The answer was a scroll-driven, animated micro-site that anyone can build for their crush — personalised photos, word-by-word text reveals, puzzles, trait cards — and share as a link. No app download for the viewer, no pressure, no performance required.

The creator builds a page in a visual drag-and-drop editor with 30+ section types: text letters, Polaroid galleries, sealed envelopes, iMessage-style chat replays, origami reveals, scratch-cards, Spotify embeds, ambient audio, and more. They set a unique slug, pick from 8 emotional colour palettes, and publish. When the crush opens the link they experience a story that unfolds as they scroll: text fills with colour word-by-word, photos develop Polaroid-style, background blobs breathe through the mood palette. At the end they choose whether to reveal the creator's contact — WhatsApp one-tap, vCard, or QR code — entirely on their own terms.

The core emotional differentiator is real-time presence: the creator sees when the crush opens the page, which section they are reading at any moment, and receives Web Push notifications when a puzzle is solved or contact is revealed. The platform shipped live at heybeautiful.one and was architected from day one as white-label ready for ThemeForest.

## The Challenge

The project had a hard constraint: 41 days, solo, alongside two other active freelance projects. The product required genuinely complex frontend engineering — scroll-driven animation across 30+ section types, a polymorphic drag-and-drop editor, a real-time presence layer — on top of a complete SaaS backend: payments, magic-link auth, 13 AI writing endpoints, 4-language i18n, PWA with offline support, custom domain middleware, and a content moderation admin panel. Every architectural decision had to optimise for shipping speed without creating debt that would block the ThemeForest path.

The presence system was the most critical architectural question. Supabase Realtime was the obvious choice but would lock the project to a cloud vendor and struggle under high-frequency scroll events. WebSockets are awkward in Next.js App Router's serverless model. The product's core emotional promise — "see when she opens your page" — could not be a known gap at launch.

## Architecture

The stack is a Turborepo monorepo with Bun workspaces. Next.js 16 App Router, React 19, Tailwind CSS v4. GSAP 3 with ScrollTrigger drives scroll-based animations; Framer Motion handles UI transitions; Lenis provides smooth scroll. Drizzle ORM manages a 30-table PostgreSQL 17 schema across 36 API route groups.

Page section content is stored as a JSONB column on `crush_pages` rather than relational rows. Sections are always read and written together; a relational table would require either a column per section type or a wide EAV model. Drizzle's `jsonb()` type keeps TypeScript inference clean at the cost of no SQL-level filtering inside section content — acceptable at this scale.

Real-time presence runs on Dragonfly pub/sub via `ioredis`. The viewer-side `usePresenceBroadcast` hook broadcasts scroll depth and current section index, throttled to a delta greater than 5% or a section change — reducing event volume by roughly 20x while keeping creator visibility accurate. Stripe handles four subscription tiers; the Stripe client singleton is marked `server-only` to guarantee the secret key can never appear in a client bundle.
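A hedged sketch of that throttle rule; the names are invented for illustration and `publish` stands in for the API route that performs the Dragonfly `PUBLISH`:

```ts
type PresenceEvent = { pageId: string; section: number; depth: number };

export function makePresenceBroadcaster(
  publish: (e: PresenceEvent) => void,
  pageId: string
) {
  let lastDepth = -Infinity;
  let lastSection = -1;

  // depth is normalised scroll position in [0, 1].
  return (depth: number, section: number) => {
    // Emit only on a section change or a scroll delta above 5%; dropping
    // everything else is where the ~20x event-volume reduction comes from.
    if (section !== lastSection || Math.abs(depth - lastDepth) > 0.05) {
      lastDepth = depth;
      lastSection = section;
      publish({ pageId, section, depth });
    }
  };
}
```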
## Key Decisions

**JSONB sections over relational rows.** Sections are polymorphic — a text block, a Polaroid gallery, and a Spotify embed share a table row but have entirely different shapes. JSONB stores each as a typed object without requiring schema migrations for every new section type added during the sprint.

**Dragonfly over Supabase Realtime or WebSockets.** Self-hostable, faster for high-frequency scroll events, and the pub/sub-to-SSE pattern is idiomatic in the App Router's serverless model. Locking to a cloud vendor for the product's most important feature was the wrong trade.

**White-label config package from day one.** `siteConfig`, `themeConfig`, and `emotionalColors` live in a shared `packages/config` workspace. A ThemeForest buyer can rebrand the entire product by editing one file. This cost roughly 10% extra upfront and saved all component surgery at handoff.

**Graceful degradation on every optional service.** Stripe, Anthropic API, VAPID push, and MinIO are all behind runtime guards. Any one missing returns a clean 503 or silently hides the feature in the UI. The app runs with just PostgreSQL and Dragonfly — the self-hosted happy path.

**Emotional arc through animation timing.** A tempo map in `ANIMATIONS.md` defines GSAP `duration` by section type: slow (2–3s) for the introduction, very slow (3–5s) for the intimacy section. Pacing is a first-class design constraint enforced at the component level, not left to intuition.

## Results

HeyBeautiful shipped live at heybeautiful.one with all 207 planned roadmap items completed across 35 feature phases — 44,000+ lines of TypeScript, 418 source files, 30+ animated section types, 36 API route groups, 13 Claude-powered AI writing endpoints, 4 Stripe tiers, 4-language i18n with RTL, a full PWA with offline service worker, and Playwright E2E tests. The white-label config package and Docker Compose self-host path position it for ThemeForest submission.

The real-time presence system — section-level granularity, Web Push at four milestones — is the feature that separates it from every static link-sharing tool on the market, shipped as part of a 41-day solo sprint alongside two other active freelance projects.

## Self-Hosted Cloud Storage

**Subtitle:** Nextcloud replacing Google Drive and LucidLink
**Category:** infrastructure
**Tech:** Nextcloud, rclone, pCloud, Docker Compose, Caddy, Tailscale, OAuth2, Mailcow
**URL:** https://yourportfolio.example/projects/prachyam-cloud-storage

Replaced Google Workspace and LucidLink for a 22-person two-office studio with a self-hosted Nextcloud instance on RackNerd, backed by a 5 TB pCloud lifetime plan via async rclone sync, cutting $9,585/year in SaaS spend to $65/year after a 25-day payback.

## Overview

Prachyam Studios was running Google Workspace Business Standard and LucidLink Business side by side — one for email and documents, one for large video file sharing across two offices. Combined, the two products cost ~₹8.90 L/year (~$9,585/year) for a 22-person team. LucidLink is particularly punishing for video-heavy teams because it charges per seat and meters egress on assets that are regularly hundreds of gigabytes.

The replacement is a self-hosted Nextcloud instance on a ₹6,000/year (~$65/year) RackNerd VPS, backed by a pCloud 5 TB lifetime plan as durable cold storage, with rclone handling async sync between the two tiers. Mailcow's OAuth2 endpoint — already running for team email — serves as the login provider, so team members sign in with their existing `@prachyam.com` credentials. From the user's perspective it is Google Drive: a desktop sync client, a web UI, shared folders, no new passwords.

Year 1 cost including the pCloud lifetime license: ~₹61,604 (~$664). Year 2 onwards: ~₹6,000/year (~$65/year).

## The Challenge

The team was non-technical across the board. Any replacement requiring behavioural change — a new client, a different folder structure, a separate login — risked low adoption and a reversion to the old stack. The replacement had to feel identical to what people already used, install quietly, and stay out of the way.

Storage architecture added a harder technical constraint. pCloud does not offer a POSIX mount fast enough for concurrent multi-user access with large video files. Mounting it directly as Nextcloud's data directory would have made every file operation dependent on pCloud's API latency — visible lag on every upload, download, and directory listing. The design had to fully decouple user-facing I/O speed from the cold storage backend.
## Architecture

Nextcloud runs in Docker Compose on the RackNerd VPS behind a Caddy reverse proxy handling HTTPS termination on a custom team subdomain. The VPS disk is partitioned with an 80 GB local buffer that serves as Nextcloud's primary data directory — all user reads and writes hit local disk at full VPS I/O speed. A scheduled rclone job runs every 15 minutes in the background, syncing the buffer to pCloud using `rclone sync` with conflict resolution. pCloud is the durable cold store; the VPS buffer is the fast-access layer.

Authentication flows through Mailcow OAuth2. Nextcloud is configured as an OAuth2 client pointing at the Mailcow endpoint already running for team mail — onboarding a new team member means one credentials step, not two. Both offices reach the Nextcloud subdomain over Tailscale, the same mesh used for mail and AI inference, with no public port exposure required for internal access.
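The sync loop itself is a cron-style rclone job; a minimal sketch, assuming Bun's shell API and an rclone remote named `pcloud` (paths, remote name, and flags are illustrative, not the production cron entry):

```ts
// Illustrative only: the production setup runs rclone on a 15-minute cron;
// this sketch approximates it with a timer using Bun's shell API.
import { $ } from "bun";

const BUFFER_DIR = "/srv/nextcloud/data"; // fast local tier (assumed path)
const REMOTE = "pcloud:studio-backup";    // durable cold tier (assumed remote)

setInterval(async () => {
  // One-way sync; Nextcloud's user-facing I/O never waits on pCloud.
  await $`rclone sync ${BUFFER_DIR} ${REMOTE} --transfers 4 --checksum`;
}, 15 * 60 * 1000);
```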
## Key Decisions

**Two-tier buffer over direct pCloud mount.** Nextcloud's native External Storage app can mount pCloud directly, but per-request API calls on every file operation cause UI lag and rate-limit risk under concurrent multi-user load. The rclone background-sync approach keeps Nextcloud fast and treats pCloud purely as a backup target, eliminating the latency coupling entirely.

**rclone over Nextcloud External Storage.** The native integration is the obvious path; it was rejected specifically because of per-request API latency at the access patterns a video-heavy team generates. Running rclone as a background cron job decouples user-facing responsiveness from the sync layer completely.

**Mailcow OAuth over Nextcloud LDAP.** LDAP would have required standing up an OpenLDAP or FreeIPA server — a significant additional service to maintain. Mailcow's OAuth2 endpoint was already running and required a single configuration change in Nextcloud to wire in. The SSO pattern, established for mail, extended to Nextcloud and every subsequent internal service for free.

**pCloud lifetime license over recurring cloud storage.** The $599 Black Friday lifetime license amortises to near-zero over any multi-year horizon versus the $9,585/year SaaS bill it helped replace. At 22 users, the math favoured a one-time purchase decisively; the 25-day payback made the case in one slide.

## Results

Google Workspace and LucidLink were fully decommissioned. All 22 team members adopted the Nextcloud desktop client without retraining — the UX is functionally identical to Google Drive Sync. Year 1 net savings: ~₹8.28 L (~$8,921) with full payback in 25 days. Year 2+ recurring savings: ~₹8.84 L/year (~$9,520/year). Adding future team members costs nothing incrementally, versus $27/month + ₹864/month per new hire under the old stack. The Mailcow OAuth integration extended the studio's shared identity layer to a second internal service at zero additional infrastructure cost, establishing the pattern every subsequent tool on the Tailscale mesh would follow.

## Local AI Inference Mesh

**Subtitle:** Private GPU inference across two offices
**Category:** infrastructure
**Tech:** Flux, Kokoro, Parrot, Whisper, Tailscale, Docker, HTTP API, Python
**URL:** https://yourportfolio.example/projects/prachyam-local-ai

Turned an idle Pune office GPU workstation into a private AI inference server — running Flux image generation, Kokoro TTS, Parrot audio dubbing, and Whisper transcription — shared over a Tailscale mesh to a second office, eliminating all cloud AI API spend.

## Overview

Prachyam Studios produces Indian cultural and dharmic content at scale — a constant appetite for promotional art, regional-language voiceovers, audio dubs, and subtitle files. The team was paying per-call to cloud APIs: Midjourney-equivalents for image generation, ElevenLabs-class services for TTS, cloud transcription for subtitles — a combined burn rate of conservatively $100–400+/month. Meanwhile, the Pune office had GPU-capable workstations sitting largely idle.

The fix was architectural: co-locate all inference workloads on the existing Pune GPU machine, serve them as simple HTTP endpoints, and route the Varanasi office's requests through the Tailscale mesh already in place for mail and file storage. The content team gets unlimited generation capacity at zero per-call cost. No cloud accounts, no egress fees, no per-character billing.

Model selection was deliberately India-first. Kokoro and Parrot were chosen over English-first defaults specifically for Hindi and regional-language quality — the output gap versus generic open-source TTS was significant enough that the wrong choice would have produced unusable voiceovers.

## The Challenge

The team's cloud AI spend scaled directly with output volume, which created exactly the wrong incentive: creators self-censored requests, batched generation jobs, and accepted first-pass results to avoid burning budget. Per-call pricing was suppressing iteration and hurting asset quality.

The Varanasi office added a distribution constraint. Replicating models to both sites was a non-starter — Varanasi machines had no GPU capacity. A public endpoint would have exposed the inference server to the internet. The right answer was to centralise compute on Pune's GPU and tunnel Varanasi traffic through the existing private mesh, treating the GPU as a shared internal service rather than a local tool.

## Architecture

Flux handles image generation — promotional posters, thumbnails, and content art — served via a local API wrapper on the Pune GPU machine. Kokoro provides high-quality multi-lingual TTS for regional Indian languages, used for narration, promos, and content previews. Parrot handles audio dubbing, converting content tracks into regional-language dubs in-house. A Whisper-family model generates subtitle files from uploaded audio, eliminating per-minute cloud transcription cost.

All four workloads are served as HTTP endpoints on the Pune machine's Tailscale IP. The Varanasi office machines were already on the same Tailscale mesh used for Mailcow and Nextcloud — adding the GPU machine as another mesh node required zero additional networking configuration on the Varanasi side. Both offices hit the same endpoint URL; requests route over Tailscale; generated images and audio files return directly with no internet hop. The inference server never has a public IP.
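Client-side, consuming the mesh endpoint is plain HTTP; a minimal sketch with an invented Tailscale IP and route shape, since the real API surface isn't documented here:

```ts
// Illustrative only: host and payload shape are assumptions.
const INFERENCE_HOST = "http://100.64.0.12:8080"; // Tailscale IP, never public

// Request a Kokoro TTS render and return the audio bytes.
export async function synthesise(text: string, voice: string): Promise<Uint8Array> {
  const res = await fetch(`${INFERENCE_HOST}/tts`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text, voice }),
  });
  if (!res.ok) throw new Error(`inference server returned ${res.status}`);
  return new Uint8Array(await res.arrayBuffer());
}
```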
## Key Decisions

**Centralised inference over distributed.** Pune had GPU hardware; Varanasi did not. Multi-node inference orchestration across asymmetric hardware would have introduced coordination complexity with no benefit. One GPU, one endpoint base URL, routed over the mesh — operationally minimal, and latency-negligible for file-generation requests that run in seconds.

**Reusing the existing Tailscale mesh.** Every internal service added to the mesh — Mailcow, Nextcloud, then the inference server — immediately became available to all connected offices without any new networking work. The mesh compounds in value with each node; the marginal cost of adding the GPU machine was near zero.

**India-first model selection.** Kokoro and Parrot required hands-on evaluation against Coqui, VITS variants, and OpenVoice. For Hindi and South Indian languages, the quality gap between models was pronounced. Selecting the wrong model would have produced output the content team couldn't use — the research time was the real cost of the project.

**Framing the service as unlimited.** Removing the per-call constraint changed team behaviour immediately. Creators iterated on prompts, generated alternatives, and produced higher-quality assets because there was no budget meter running. Unlimited local capacity was a product decision as much as an infrastructure one.

## Results

All cloud AI API costs for image generation, TTS, transcription, and audio dubbing were eliminated — a conservative $100–400+/month in recurring spend reduced to $0/call on hardware the studio already owned. The content team gained the ability to produce regional-language voiceovers and audio dubs in-house without engaging a dubbing studio for every promotional asset. Both offices accessed the same inference endpoints transparently over Tailscale with no additional VPN setup. The Tailscale mesh pattern, proven across mail and file storage, extended cleanly to a fourth internal service — confirming it as the studio's composable private networking primitive for all subsequent infrastructure additions.

## Karmpath

**Subtitle:** AI-Powered Blogging Platform
**Category:** fullstack
**Tech:** Next.js, TypeScript, PostgreSQL, Redis, Docker, Kubernetes, Sentry, GitHub Actions
**URL:** https://yourportfolio.example/projects/karmpath
**Source:** https://github.com/xczer/karmpath
**Live:** https://karmpath.dharmic.cloud

Architected a scalable blogging platform with AI-powered content recommendations, deployed with Docker and Kubernetes in a microservices setup.

## Overview

Karmpath is a scalable blogging platform with AI-powered content recommendations. Built to handle real traffic at scale, it uses collaborative filtering and content-based algorithms to surface relevant posts to readers.

## The Challenge

Most blogging platforms either can't handle traffic spikes without expensive infrastructure, or they don't leverage AI for content discovery. I wanted to build something that does both — a platform that stays fast under load and gets smarter the more people use it.

## My Approach

I designed Karmpath as a microservices architecture from the start, knowing that different components would need to scale independently:

**Content Service** handles CRUD operations for blog posts, full-text search, and content delivery. It sits behind Redis caching for the most frequently accessed posts.

**Recommendation Engine** runs the AI/ML models — collaborative filtering based on user reading patterns and content-based matching using post embeddings. It operates asynchronously, pre-computing recommendations during off-peak hours.

**Auth Service** manages JWT-based authentication with refresh token rotation, session management, and rate limiting.

**CDN Layer** serves static assets, images, and cached HTML from the edge, keeping Time to First Byte under 100ms for most users.

The entire system deploys on Kubernetes with autoscaling policies — when traffic spikes, pods scale horizontally without manual intervention. CI/CD runs through GitHub Actions: lint, test, build, and deploy to the K8s cluster on every merge to main.
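The hot-path read behind the Content Service is a standard cache-aside lookup; a minimal sketch with an assumed data-layer stub, not the Karmpath source:

```ts
import Redis from "ioredis";

type Post = { slug: string; title: string; body: string };
// Stand-in for the real data layer.
declare const db: { findPostBySlug(slug: string): Promise<Post> };

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

export async function getPost(slug: string): Promise<Post> {
  const cached = await redis.get(`post:${slug}`);
  if (cached) return JSON.parse(cached) as Post; // ~5ms hot path

  const post = await db.findPostBySlug(slug);    // ~200ms database path
  // A short TTL keeps a popular post hot without explicit invalidation on write.
  await redis.set(`post:${slug}`, JSON.stringify(post), "EX", 300);
  return post;
}
```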
## Key Decisions

**Why Kubernetes over serverless?** The recommendation engine needs persistent connections to the database and sustained compute for model inference. Serverless cold starts would have killed the user experience.

**Why Redis for caching?** Blog content is read-heavy. A single popular post might get thousands of reads but only one write. Redis brings hot content response times from ~200ms (database) to ~5ms (cache).

**Why microservices for a blogging platform?** Honestly, it would have been simpler as a monolith. But I built it this way intentionally — to prove I could architect distributed systems that actually work in production. The recommendation engine scaling independently from the content service has been the biggest win.

## Results

The platform handles 10,000+ concurrent users with 99.9% uptime. Redis caching and edge optimization delivered a 60% improvement in load times. Sentry monitoring catches issues before users notice them, and the automated CI/CD pipeline means every merge to main is in production within minutes.

## Self-Hosted Mail Stack

**Subtitle:** 20M emails, 12 domains, zero Mailchimp
**Category:** infrastructure
**Tech:** Mailcow, Postfix, Dovecot, Rspamd, Docker Compose, Tailscale, OAuth2, ClamAV
**URL:** https://yourportfolio.example/projects/prachyam-mailcow

Replaced Mailchimp and per-seat email SaaS for a 20-person studio by deploying Mailcow across 6 RackNerd VPS servers and 12 domains, delivering ~20 million outbound marketing emails at ~$151/year versus a $300,000 enterprise Mailchimp estimate.

## Overview

Prachyam Studios needed to run large-scale marketing email campaigns against a ~350 million record dataset — a volume that sits far above Mailchimp's 200k-contact Premium cap and lands in custom-quote enterprise territory, conservatively estimated at ~$300,000 for a 6-month campaign run.

The self-hosted answer was Mailcow: a Docker Compose stack bundling Postfix, Dovecot, Rspamd, ClamAV, SOGo, and a management UI into a single deployable unit. I proposed, built, and operated the full stack end-to-end: six RackNerd VPS servers each running a Mailcow instance, 12 custom sending domains with full SPF/DKIM/DMARC configuration, and Postfix transport maps routing each domain's outbound mail through its designated server to preserve per-domain IP reputation. The same infrastructure absorbed all internal transactional mail and team inboxes for all 20 staff across both Pune and Varanasi offices.

The campaign delivered ~20 million emails. Total infrastructure spend for the 6-month run: ~₹9,000 (~$97) — all-in.

## The Challenge

Two problems collided on the same Postfix queue. Marketing blasts were volumetrically enormous; internal transactional messages (password resets, onboarding emails, platform notifications) were operationally critical. Left unaddressed, the bulk queue starved internal mail by hours during campaign runs. Splitting onto entirely separate server sets was the obvious fix but added operational surface and cost on a shoestring infrastructure budget.

The harder constraint was operational simplicity. There was no dedicated DevOps person. The mail stack had to be self-managing enough that the rest of the team could use it without knowing it existed, while simultaneously handling campaign volumes that enterprise SaaS providers charge hundreds of thousands of dollars to process.
## Architecture

The stack runs Mailcow on Docker Compose across 6 RackNerd VPS instances. Each server handles a subset of the 12 sending domains; Postfix transport maps bind each domain to its designated server and IP, keeping the domain-IP pairing stable for reputation purposes. Rotating 12 domains across 6 IPs acts as reputation insurance — a single deliverability incident cannot take down all outbound volume simultaneously.

Postfix priority queuing separates bulk and transactional mail at the transport level using `transport_maps`, `smtp_destination_rate_delay`, and custom `defer_transports` to guarantee internal messages skip the bulk backlog regardless of campaign depth.

Both offices connect via the existing Tailscale mesh, resolving to the same mail infrastructure without public exposure. Mailcow's built-in OAuth2 endpoint serves as the SSO identity provider for all other internal tooling, including the self-hosted Nextcloud instance, eliminating the need for a separate directory service.

## Key Decisions

**Priority queue over separate servers.** A dedicated marketing server and a dedicated internal server would have solved starvation trivially, but at the cost of double the operational surface. Keeping both traffic classes on the same server per domain with queue-level separation kept log streams unified and debugging straightforward — the trade-off was more upfront configuration in Postfix transport classes.

**12 domains across 6 servers for reputation distribution.** Sending reputation attaches to the domain-IP pair. Rotating sending domains distributes risk so no single blacklist event halts all outbound volume. Transport maps enforce the pairing stability that makes this strategy coherent; random assignment would defeat it.

**Mailcow OAuth as team SSO.** Rather than standing up Keycloak or Authentik as a separate identity layer, Mailcow's built-in OAuth2 endpoint was repurposed as the team's single sign-on. Unconventional but pragmatic for a 20-person team: one fewer service to provision and maintain, and the username is already the work email everyone knows.

**Self-hosted Mailcow over AWS SES + Listmonk.** SES + Listmonk was evaluated as the credible middle ground — estimated ~$7,500 for the same 6-month campaign run. Mailcow on owned VPS reduced that to ~$97 (a ~98.7% reduction) because the only recurring cost is the RackNerd VPS subscription at ~$65/year.

## Results

The 6-month campaign delivered ~20 million emails at a total infrastructure cost of ~₹9,000 (~$97), versus a ~$300,000 Mailchimp enterprise estimate and a ~$7,500 AWS SES + Listmonk projection — saving ~$299,900 and ~$7,400 respectively. Annual run-rate settled at ~₹14,000 (~$151/year) for VPS and domains combined. Internal mail starvation was eliminated: transactional messages delivered within normal SLA throughout every campaign run. The same stack replaced per-seat email SaaS for all 20 team members at no additional cost, and its OAuth endpoint became the shared identity layer for every subsequent internal service on the mesh.

## Docsee

**Subtitle:** Docker Management Suite
**Category:** opensource
**Tech:** Rust, Tauri, Svelte, Ratatui, Bollard, GitHub Actions
**URL:** https://yourportfolio.example/projects/docsee
**Source:** https://github.com/xczer/docsee
**Live:** https://docsee.dharmic.cloud

Cross-platform Docker management tool with a Tauri desktop GUI and Rust terminal TUI, achieving sub-50ms response times for container operations.

## Overview

Docsee is a lightweight, open-source Docker management tool that gives you both a desktop GUI and a terminal TUI — powered by a shared Rust core.
It's everything Docker Desktop should be: fast, small, and free.

## The Challenge

Docker Desktop is slow, heavy (2GB+), and requires a paid license for commercial use. The command line is powerful but not visual. There's a gap: developers need a fast, lightweight way to manage containers without the bloat.

## Architecture

```mermaid
flowchart LR
  Tauri --> Core
  Events --> Ratatui
  Events --> Core
  Core --> Bollard --> Docker
```

## My Approach

The architecture splits into three layers:

**Rust Core** communicates with the Docker daemon through the Bollard library — async Rust bindings for the Docker Engine API. Every operation (list, start, stop, inspect, logs) completes in under 50ms. The core handles all Docker interactions and exposes a clean API to both interfaces.

**Desktop GUI** uses Tauri + Svelte. Tauri gives us a native window with web technologies inside, but without Electron's overhead. The entire GUI binary is ~8MB — compared to Docker Desktop's 2GB+. Svelte keeps the frontend reactive and fast.

**Terminal TUI** uses Ratatui for rich terminal rendering. It's for power users who never leave the terminal. Real-time container stats (CPU, memory, network, disk I/O) stream directly in the terminal with instant updates.

Both interfaces share the same Rust core, so behavior is identical regardless of which one you use. GitHub Actions handles CI/CD with automated multi-platform releases for Linux, macOS, and Windows.

## Key Decisions

**Why Rust over Go?** Go would have been the obvious choice for Docker tooling — Docker itself is written in Go. But Rust's async model (tokio) gives genuinely better performance for the kind of concurrent operations Docker management requires. And the memory safety guarantees mean no runtime crashes from null pointers or data races.

**Why Tauri over Electron?** Binary size. An Electron app starts at ~150MB minimum. Tauri starts at ~3MB. For a tool that's supposed to replace bloatware, shipping our own bloatware would be ironic.

**Why both GUI and TUI?** Different workflows need different interfaces. When I'm deep in terminal work, I don't want to context-switch to a GUI. When I'm doing visual container management, I want to see everything at a glance. Both are first-class citizens.

## Results

Sub-50ms response times for all container operations. An 8MB desktop app that does what a 2GB app does. A 3MB terminal tool that makes Docker management feel native. Automated releases across three platforms on every tag push.

## Local Infra

**Subtitle:** 116 containers, one laptop, zero per-project setup
**Category:** infrastructure
**Tech:** Docker Compose, Caddy, dnsmasq, Cloudflare DNS, PostgreSQL, Temporal, OrbStack
**URL:** https://yourportfolio.example/projects/local-infra
**Live:** https://local-infra.dharmic.cloud

A personal production-parity development mesh — 116 Docker containers, 97 HTTPS service routes, and 19 profile categories — that eliminated per-project infrastructure duplication across 6 active projects, built in 105 minutes in a single evening.

## Overview

Six active projects each maintaining their own Postgres, Redis, and MinIO containers in isolation meant duplicate memory usage, credential sprawl, and a persistent HTTP-in-dev vs HTTPS-in-prod gap that made OAuth redirects, secure cookies, and service workers impossible to test locally.
The fix was a shared infrastructure mesh: a single `docker-compose.yml` (3,425 lines, 116 containers) with 19 named profiles, a Caddy wildcard TLS reverse proxy covering 97 named service routes, and dnsmasq resolving all `*.dev.dharmic.cloud` subdomains to localhost. Every project gets HTTPS-secured local domains on day one.

The full service catalogue covers databases, search and vectors, messaging, real-time streaming, workflow orchestration, observability (full LGTM stack), AI/LLM inference, auth (Authentik SSO), analytics, payments, billing, secrets management, CI/CD, and dev tooling — all accessible at production-quality HTTPS URLs rather than `localhost:PORT`. The entire repo was built in a single ~105-minute evening session: 8 commits between 21:09 and 22:51 on 2026-03-28.

## The Challenge

The core problem was not hardware — a MacBook M1 Max with 64 GB has ample RAM for many concurrent containers. The constraint was cognitive: managing 6 separate `docker-compose.yml` files with mismatched credentials and port conflicts was becoming unmaintainable, and the HTTP/HTTPS dev-prod gap was causing bugs that could only reproduce in staging. The solution needed to work with all runtimes in use (Bun, Rust, React Native, Tauri) without IDE coupling.

## Architecture

The DNS-to-TLS-to-container flow has three layers. dnsmasq (Homebrew, symlinked to `~/infra/dnsmasq/dnsmasq.conf`) resolves `*.dev.dharmic.cloud` to `127.0.0.1`, `*.co.dharmic.cloud` to the OrbStack Coolify VM, and `*.do.dharmic.cloud` to the OrbStack Dokploy VM via three wildcard address rules — no per-service DNS entries. Caddy (Homebrew, symlinked to `~/infra/caddy/Caddyfile`) obtains a single `*.dev.dharmic.cloud` wildcard certificate via Cloudflare DNS-01 ACME challenge and reverse-proxies all 97 named hosts to Docker container ports.

Docker Compose provides the 116 container definitions grouped into 19 profiles (`core`, `search`, `messaging`, `workflows`, `media`, `ai`, `monitoring`, `streaming`, `analytics`, `automation`, `docs`, `customer`, `email`, `auth`, `notifications`, `platform`, `devtools`, `payments`, `internal-tools`). A `ports.yml` registry (254 lines) tracks every assigned port. A `MIGRATE.md` runbook, written as an AI-executable instruction set, automates onboarding any project from isolated Docker to the shared mesh.

## Key Decisions

**DNS-01 challenge for wildcard TLS.** HTTP-01 requires public internet reachability, which is impossible for localhost. DNS-01 via Cloudflare API proves domain ownership through a TXT record — one cert, all 97+ subdomains, no browser warnings, no `--allow-insecure-localhost` anywhere.

**Docker Compose profiles as the service selector.** Rather than separate compose files per project (which duplicate shared services), profiles turn service inclusion into a CLI flag. `docker compose --profile prachyam-sangam up -d` starts exactly the subset that project needs, keeping RAM proportional to active work.

**`ports.yml` as machine-readable port registry.** The file tracks every reserved infra port and per-project frontend/API/worker ports in one place. Claude Code reads it during migration to find the next free port and register the new project — fully automated, zero manual port hunting.
**`MIGRATE.md` as AI-executable runbook.** The file begins with a direct instruction for Claude Code to read and follow. The full migration procedure — identify shared services, reconcile credentials, restructure compose, update `.env`, register profiles — is written for an AI agent, not just a human reader. New project onboarding takes one ~20-minute session.

## Results

All 6 active projects share one Postgres instance, one Dragonfly cache, one MinIO with per-project buckets, and the full observability and tooling stack. HTTPS in dev is now structurally identical to HTTPS in prod — OAuth redirects, secure cookies, and service workers all work without a staging deploy. The four-tier topology (local → Coolify staging → Dokploy staging → production) uses the same `docker-compose.yml` across all tiers, with environment differences expressed only through `.env` values. The repo replaced roughly 12 categories of paid SaaS development tooling with self-hosted equivalents running locally at zero ongoing cost.

## Custom MCP Servers

**Subtitle:** Ten tools that gave Claude Code a brain
**Category:** aitools
**Tech:** TypeScript, MCP SDK, SQLite, Qdrant, Ollama, PostgreSQL, Docker, Tailscale, BullMQ, Prometheus
**URL:** https://yourportfolio.example/projects/mcp-servers
**Live:** https://mcp-servers.dharmic.cloud

10 purpose-built Model Context Protocol servers in TypeScript that collapsed Claude Code's context overhead by 69% while giving the AI live access to every service layer of a 9-platform OTT monorepo.

## Overview

Claude Code kept hitting token limits on a 9-platform OTT monorepo. The `CLAUDE.md` file had grown to 105k characters — 29 workflow documents, schema docs, and API docs all inlined — and the model was saturating mid-session with no way to inspect live infrastructure state. The fix was to give it tools instead of text.

The result is 10 custom MCP servers: TypeScript processes that Claude Code spawns on demand via stdio transport, each connecting to a specific layer of the stack — Postgres, BullMQ, MinIO, Docker, Prometheus, Grafana, feature flags, environment secrets, semantic code search, and project task state. Together they reduced `CLAUDE.md` from 105k to 32k characters while adding live infrastructure visibility that never existed before.

All 10 servers are wired into `.ruler/mcp.json` so every `claude` session inside the monorepo loads the full toolset automatically. Several servers point their env vars at Tailscale mesh IPs so a session on the development iMac can reach Docker services on a remote Linux machine without SSH tunnels.

## The Challenge

The iMac had 16 GB of RAM and all stateful Docker services — Postgres, Dragonfly, MinIO, Temporal, NATS, Prometheus, Grafana, Flipt — lived on two machines accessible only over Tailscale SSH. Inlining every piece of context into `CLAUDE.md` was the first solution attempted, and 105k characters was where it broke.

Third-party MCP servers (filesystem, sqlite) are generic — they cannot understand a BullMQ queue topology, OTT task state, or the per-app environment variable requirements of a monorepo with 6 distinct service boundaries. The servers also had to be environment-variable–driven so `DOCKER_HOST`, `DB_HOST`, and `REDIS_HOST` could be pointed at remote Tailscale IPs at session start with no changes to server source code. The debugging loop for failing transcoding jobs — browser, SSH terminal, editor — had to collapse entirely into the coding session.
## Architecture

All 10 servers use `StdioServerTransport` — each is a long-running child process spawned by Claude Code, communicating over stdin/stdout, then connecting out to its target service. Zero port conflicts, no auth layer, and the server lifecycle is tied to the Claude Code session itself.

Storage is per-server and purpose-fit. `ott-context-mcp` uses a local SQLite database with 17 tables in WAL mode. `code-embeddings-mcp` stores 768-dimensional vectors in self-hosted Qdrant, generated locally by Ollama's `nomic-embed-text` model. `drizzle-studio-mcp` connects directly to Postgres and renders results as ASCII box-drawing tables in Claude's response.

Tailscale bridging works by passing `DOCKER_HOST=ssh://user@` to `dockerode`, which tunnels Docker API calls over SSH automatically. The code-embeddings server uses a `BatchProcessor` (50 items, 5 concurrent), a 10k-entry LRU `EmbeddingCache`, and a `RateLimiter` to avoid overwhelming Ollama. Incremental hash-based change detection means re-indexing only touches modified files.

## Key Decisions

**stdio over HTTP transport.** Zero port conflicts, no auth surface, and the server dies cleanly with the session. HTTP would be necessary for a shared team setup; for solo use it is pure overhead.

**Environment-variable host injection for Tailscale.** Every server reads its target from env vars rather than hardcoded IPs. `dockerode` parses `DOCKER_HOST=ssh://user@host` natively — no manual tunnel setup required.

**SQLite for `ott-context-mcp`.** Postgres already runs on a remote machine. Adding a network dependency to what must be a fast, always-available local tool was the wrong tradeoff. SQLite with WAL mode handles the concurrent read/write pattern of multiple tool calls per session without blocking.

**Local Ollama, not a cloud embedding API.** Zero cost per embedding call, no data leaving the machine, and `nomic-embed-text` 768-dim is sufficient for TypeScript/TSX code search. The embedding cache and incremental indexing absorb the latency tradeoff.

**Pattern learning as a first-class tool.** The `learn_pattern` tool writes structured records — mistake, correction, severity, package scope, optional auto-fix script — to SQLite. Claude Code calls it at the end of a debugging session, creating a durable correction library that persists across context resets — unlike unreliable "remember this" prompts.

**CLAUDE.md reduction as a KPI.** The 69% reduction from 105k to 32k characters was the stated design target. Workflow docs load on demand via `workflow_get`; context budget is spent on code.

## Results

`CLAUDE.md` shrank from 105k to 32k characters — roughly 18,250 tokens saved per session — eliminating context-limit interruptions that had been routine. Infra diagnostics that previously required a browser or SSH terminal now happen inside the coding session: when a transcoding job fails, Claude Code can inspect the BullMQ queue, check Docker logs, query Prometheus, and suggest a fix without switching context. The `ott-context-mcp` pattern-learning system means recurring error classes are flagged before they repeat. The `code-embeddings-mcp` enables natural-language code navigation across a monorepo too large to hold in context — at zero per-query cost using local Ollama inference.
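A hedged sketch of the code-embeddings flow just described; the collection name, ports, and helper shapes are illustrative, not the server's source:

```ts
import { QdrantClient } from "@qdrant/js-client-rest";

const qdrant = new QdrantClient({ url: "http://127.0.0.1:6333" });

// Embed a query locally with Ollama's nomic-embed-text model (768 dims).
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://127.0.0.1:11434/api/embeddings", {
    method: "POST",
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const { embedding } = (await res.json()) as { embedding: number[] };
  return embedding;
}

// Natural-language code search: nearest indexed chunks by cosine similarity.
export async function searchCode(query: string, limit = 5) {
  return qdrant.search("code-chunks", { vector: await embed(query), limit });
}
```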
## Prachyam Sangam

**Subtitle:** India-first OTT platform, 9 native targets
**Category:** fullstack
**Tech:** TypeScript, Next.js, Elysia, Bun, Expo, Nx, Drizzle, PostgreSQL, Dragonfly, Typesense, Qdrant, Tauri
**URL:** https://yourportfolio.example/projects/ott-platform

Full-stack OTT streaming platform built solo in 5 months — a 21-package Nx TypeScript monorepo targeting web, iOS, Android, four TV OSes, and a desktop admin, with HLS streaming, creator monetisation, watch parties, and live TV.

## Overview

Prachyam Sangam is a ground-up OTT streaming platform built for the Indian market — SVOD and AVOD content, live TV with EPG, creator channels, watch parties, and a full admin panel, all from a single TypeScript monorepo. The platform targets nine distinct surfaces: web (Next.js), iOS and Android (Expo 54), Apple TV and Android TV (Expo TV), Samsung Tizen, LG webOS, a Tauri admin desktop, and a Fumadocs developer-docs app — sharing 21 packages of business logic across every target.

The project replaced a legacy OTT carrying 18 developer-years of accumulated debt, compromised SSH keys, and a CMS that blocked new features. Three evaluated ThemeForest OTT products all under-delivered on TV support and admin completeness, so the decision was made to build from scratch. What started as an employer initiative transitioned into an independent engagement with a TV-producer partner, and reached MVP across 104 numbered sprints over approximately five months of solo development.

The result is a Coolify-compatible, self-hostable stack — PostgreSQL, MinIO, Dragonfly, Typesense, Qdrant, Soketi, BullMQ, Temporal, Prometheus, and Varnish — deployable to a single VPS without a DevOps hire. Zero TypeScript errors have been enforced from sprint 75 onward, and 17 of 20 viewer flows and 8 of 14 admin flows are demo-clean.

## The Challenge

The core constraint was hardware: a 16 GB / 512 GB M1 iMac as the sole development machine — insufficient to run seven Docker-backed apps concurrently alongside the monorepo build. Rather than cut scope or rent a beefier machine, the problem was solved by meshing three machines via a Tailscale network, offloading Docker services to two peer nodes while the iMac handled active development. Every service runs at a stable `*.willmakeitsoon.com` HTTPS subdomain via dnsmasq and Caddy, mimicking the production environment of the target VPS without production costs.

TV platform fragmentation added significant complexity on top. Tizen 4.x, LG webOS 4.x, Apple TV, and Android TV each have distinct input models, video API surfaces, and debugging toolchains. Expo TV abstracts the React Native TV layer cleanly for Apple TV and Android TV, but the Smart TV web apps needed fully separate vanilla-JS builds — one using the webOSTVjs 1.2.10 SDK, the other the Tizen CLI toolchain — each with its own D-pad focus engine and HLS player integration.

## Architecture

The monorepo is managed by Nx 22.6.1 with Bun workspaces and contains seven apps: `web` (Next.js 16, main viewer plus admin route group), `api` (Elysia on Bun — REST API with auto-generated OpenAPI via `@elysiajs/swagger`), `mobile` (Expo 54 + React Navigation), `tv` (Expo TV, Tizen, webOS), `desktop` and `admin` (Tauri 2), and `docs` (Fumadocs).
All apps draw from 21 shared packages covering auth (Better Auth), database (Drizzle + PostgreSQL), cache (Dragonfly), search (Typesense + Qdrant), storage (MinIO), payments (Razorpay), messaging (NATS + Soketi), jobs (BullMQ + Temporal), streaming (HLS transcode workers), AI, i18n, and UI component sets for both web and native surfaces.

The infrastructure layer runs entirely on Docker Compose, profiled per project. Varnish sits in front of MinIO and stream routes as an HTTP cache layer. Prometheus, Grafana, and Alertmanager provide observability. Flipt serves feature flags, togglable from both the admin panel and a dedicated MCP server. Semantic search combines Typesense for instant keyword facets with Qdrant vector embeddings generated locally via `@xenova/transformers`.

Ten custom MCP servers bridge Claude Code to live Drizzle Studio, Prometheus metrics, Flipt, transcode job state, and a pattern-learning context store — giving every development session full project context in under 60 seconds.

## Key Decisions

**Nx over Turborepo.** Nx's project-graph caching means only packages touched by a given change rebuild — critical when 21 packages would otherwise cascade. Turborepo was tested first but Nx's executor model mapped more cleanly to the Tauri build targets and cross-platform run-many scenarios.

**Elysia over Express/Fastify.** Bun runtime throughput, plus Elysia's validator plugin generates OpenAPI schema from route definitions automatically. This feeds the Fumadocs swagger-docs app and reduces documentation drift to zero without a separate schema-maintenance step; a sketch of the end-to-end typing pattern follows this list.

**Dragonfly over Redis.** Redis-protocol compatible, so zero application code changed. Dragonfly's multi-threaded engine uses approximately 30% less memory at equivalent cache hit rates — material on a budget VPS with limited RAM.

**10 custom MCP servers instead of context-switching.** DB state, transcode jobs, Prometheus metrics, and feature flags all surface in the Claude Code conversation window rather than requiring browser or terminal switches. The `ott-context-mcp` server persists learned fix patterns per package across sessions, maintaining institutional knowledge across all 104 sprints without a separate knowledge-base tool.

**Sprint-numbered commits with priority tags.** Every non-trivial change is tagged with a sprint number and a p0–p3 priority level in the commit subject. The DEMO-PLAN.md route-readiness audit was written in under an hour because every route's last relevant sprint was instantly traceable in `git log --oneline`.
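A hedged sketch of that end-to-end typing pattern, using Elysia's Eden treaty client; the route and response shapes are invented for illustration, not the actual Sangam API:

```ts
// Illustrative only: a minimal Elysia route plus a treaty client typed
// against it. Route and model names are hypothetical.
import { Elysia, t } from "elysia";
import { treaty } from "@elysiajs/eden";

const api = new Elysia().get(
  "/titles/:id",
  ({ params }) => ({ id: params.id, name: "Demo Title" }),
  { params: t.Object({ id: t.String() }) }
);

api.listen(3000);

// A platform package consumes the server type directly: if the response
// shape changes, every consumer fails to compile instead of failing at runtime.
const client = treaty<typeof api>("localhost:3000");
const { data } = await client.titles({ id: "abc" }).get();
console.log(data?.name);
```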
## Results

Prachyam Sangam reached MVP across nine distinct platforms in approximately five months of solo development — 152 commits, 104 sprints, ~972,000 lines of TypeScript, TSX, and JS, with 21 shared packages and 10 custom MCP servers. The Tailscale mesh eliminated any hardware upgrade requirement during development. Dragonfly replaced Redis with a ~30% memory reduction at no code cost. The standalone `docker-compose.standalone.yml` makes the full stack deployable to a single VPS without a DevOps hire, and the monorepo's architecture positions it as a sellable ThemeForest starter kit once the partnership engagement wraps.

## Sutradhaar

**Subtitle:** AI writing room for Indian television
**Category:** aitools
**Tech:** Next.js, TypeScript, tRPC, PostgreSQL, pgvector, Ollama, Vercel AI SDK, TipTap, Drizzle ORM, React PDF, BullMQ, MinIO
**URL:** https://yourportfolio.example/projects/sutradhaar

Built a full-stack AI director and screenplay suite for Indian TV production — TipTap screenplay editor, RAG-backed scene generation from legendary writer corpora, Navarasa-aware shot planning, and a complete pre-production management system — in a single day.

## Overview

Indian TV productions run on scattered tools: Word for scripts, Excel for schedules, WhatsApp for call sheets, and no AI that understands Hindi, Navarasa theory, or the DOOD production workflow. Western tools like Final Draft and StudioBinder cover their respective slices but speak no Hindi, know nothing of bhakti symbolism, and connect nothing between the creative and logistical layers of a production.

Sutradhaar is a full-stack AI writing room and production management suite built specifically for Indian television. It covers every pre-production phase: a professional TipTap screenplay editor with bilingual Hindi/English support and industry-standard auto-formatting; a RAG pipeline that generates scenes in the voice of legendary writers — Tulsidas, Shakespeare — by retrieving relevant passages from uploaded corpora; Navarasa-aware AI shot list generation; storyboard management; script breakdown with all 14 DOOD element categories; colour-coded stripboard scheduling; call sheet generation with PDF export; a budget tracker with variance highlighting; and a Recharts analytics dashboard with a 9-axis Navarasa radar chart.

The entire system was designed, specced, and shipped — all 25 commits — in a single six-hour session on 2026-03-18. It exists as both a portfolio anchor and a working proof-of-concept for a TV producer partner who needs exactly this system for her upcoming production.

## The Challenge

The project required genuine domain knowledge that does not exist in generic tooling. Indian TV scripts mix Hindi and English within a single block. Shot suggestions must account for all 9 Navarasa emotional states — not generic Western cinematography heuristics. Breakdown categories follow the DOOD colour-coding system with 14 defined element types. All of this had to be encoded not as runtime string matching but as first-class schema — Postgres enums, typed columns, validated structures — so every layer of the system speaks the same language without a mapping layer.

The second constraint was time. The project had to be demo-able fast enough to serve as a live portfolio piece during an active job search, and functional enough to present to a real TV producer as a credible tool for her next production — not a mockup. A 675-line implementation plan written before any code was what made single-day execution possible.

## Architecture

The stack is a single Next.js 16 app with the App Router. tRPC v11 handles all CRUD with full TypeScript type safety co-located with the schema; streaming AI endpoints live in Next.js API Routes. PostgreSQL 16 with the pgvector extension is managed via Drizzle ORM. MinIO provides object storage for document uploads and storyboard frames; Redis with BullMQ handles background document processing.

The RAG pipeline lives in three focused files.
A custom paragraph-boundary chunker greedy-merges paragraphs to 2000 characters with 200-character overlap and sentence-level sub-splitting for oversized blocks — preserving narrative context better than fixed-token chunking for literary Indian texts where a Ramcharitmanas doha often carries its meaning at the paragraph boundary. An Ollama embedder returns a zero-vector on failure rather than crashing; a pgvector retriever uses raw Drizzle SQL with the `<=>` cosine distance operator, scoped per persona, with automatic fallback to chronological fetch when Ollama is offline. The 26-table schema encodes Indian production domain knowledge as Postgres enums: `navarasaEnum` with all 9 Sanskrit emotional states, `intExtEnum`, `timeOfDayEnum`, and `sceneElementCategoryEnum` with all 14 DOOD breakdown categories. The database speaks the language of Indian film production at the type system level — every downstream layer gets that domain knowledge for free. ## Key Decisions **No LangChain — direct pgvector SQL.** LangChain adds roughly 100MB of transitive dependencies and abstracts away exactly the control points that need custom logic: zero-vector resilience, per-persona scoping, and paragraph-boundary chunking for literary texts. Three focused files are fully transparent and directly debuggable. **Custom paragraph-boundary chunker over off-the-shelf splitting.** Fixed-token chunking breaks mid-doha. The custom chunker merges at paragraph boundaries, carries overlap, and sub-splits only when a single paragraph exceeds the character budget — preserving context for the retrieval step. **`generateObject` with Zod schema for shot list AI.** The shot list route uses Vercel AI SDK's structured output mode so the response is structurally guaranteed — no parsing, no hallucinated shot shapes. Navarasa heuristics are embedded in the system prompt as natural language instructions: `veera → low angle wide + tracking`, `karuna → close-up + slow dolly`, `shringara → soft focus`. **Domain encoding in the schema, not in runtime logic.** When `navarasaEnum` exists at the DB level, the frontend renders a 9-axis Recharts radar chart without any mapping layer. When `sceneElementCategoryEnum` has 14 DOOD values, the breakdown UI is correct by construction. **Spec-first single-session execution.** A 675-line `docs/IMPLEMENTATION_PLAN.md` with exact component names, file paths, DB table names, and feature bullet points per phase was written before any code. Every architectural decision was resolved upfront; the coding session was pure execution. ## Results Sutradhaar is a demo-able full-stack application covering every pre-production phase from screenplay through call sheet, with AI assistance grounded in legendary Indian writer corpora and bilingual Hindi/English support throughout the UI. The 26-table schema, custom RAG pipeline, and all 10 tRPC routers shipped in a single day. The "Why This?" explanation engine — which cites specific dohas and corpus passages to justify any creative choice the AI makes, with source attribution — is a novel pattern for AI-assisted creative tooling that does not exist in any Western screenplay tool. The system self-hosts on any VPS via the standalone Docker Compose file, replacing a full StudioBinder subscription at zero marginal AI cost using local Ollama inference on an M-series machine.
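For reference, a minimal sketch of the paragraph-boundary chunking strategy described above (helper name invented, edge handling simplified; the production chunker differs):

```ts
const MAX_CHARS = 2000 // greedy-merge target per chunk
const OVERLAP = 200 // characters carried into the next chunk

// Split on blank lines, merge paragraphs up to the budget, and
// sub-split at sentence boundaries (incl. the devanagari danda '।')
// only when a single paragraph overflows on its own.
function chunkByParagraph(text: string): string[] {
  const paragraphs = text.split(/\n\s*\n/).map((p) => p.trim()).filter(Boolean)
  const chunks: string[] = []
  let current = ''

  for (const para of paragraphs) {
    const pieces =
      para.length > MAX_CHARS ? para.split(/(?<=[.!?।])\s+/) : [para]
    for (const piece of pieces) {
      if (current && current.length + piece.length + 1 > MAX_CHARS) {
        chunks.push(current)
        current = current.slice(-OVERLAP) // overlap seed preserves context
      }
      current = current ? `${current} ${piece}` : piece
    }
  }
  if (current) chunks.push(current)
  return chunks
}
```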
## Claude 2x — Cross-Platform Timer Overlay **Subtitle:** Swift/SwiftUI + Rust/Tauri + QML timer overlay with timezone math, objc2 FFI, and multi-platform CI **Category:** opensource **Tech:** Swift, SwiftUI, AppKit, Rust, Tauri 2, QML, objc2, chrono-tz, tauri-plugin-positioner **URL:** https://yourportfolio.example/projects/claude-2x **Source:** https://github.com/Xczer/claude-2x A cross-platform floating timer overlay shipping on macOS (Swift/SwiftUI), Linux (Tauri/Rust), and KDE (QML) — with chrono-tz timezone math, objc2 FFI for macOS-native window behaviour, and a GitHub Actions multi-platform CI pipeline. ## Overview A floating countdown timer designed to help track Claude API session windows across timezones. Ships three separate implementations sharing the same core timezone logic: a native Swift/SwiftUI app for macOS, a Tauri/Rust app for Linux desktops, and a QML widget for KDE Plasma. objc2 FFI provides macOS-specific window presentation behaviour in the Swift build. ## Tech The macOS build uses Swift, SwiftUI, and AppKit with objc2 for native window management. The Linux build uses Tauri 2 with Rust and tauri-plugin-positioner for screen-edge snapping. The KDE build uses QML. All three share chrono-tz for accurate timezone arithmetic. GitHub Actions runs the CI matrix across macOS and Linux runners. ## DocSee Popup — Raycast-Style Docker HUD **Subtitle:** Global-hotkey Tauri popup for Docker management with frecency ranking, Rust bollard, and Unix socket IPC **Category:** infrastructure **Tech:** Rust, Tauri v2, React 19, TypeScript, Vite, Zustand, Bollard, Framer Motion **URL:** https://yourportfolio.example/projects/docsee-popup A Raycast-style global-hotkey popup for Docker — built with Tauri v2 and Rust, using Bollard for Docker API access, frecency-ranked container search, ref-counted infra management, and Unix socket IPC between the app and a background daemon. ## Overview A keyboard-triggered popup window — similar in concept to Raycast — specifically designed for Docker container and image management. Press a global hotkey from any application and a floating panel appears showing running containers with CPU/memory stats, frecency-ranked by recent access. A background Rust daemon maintains Docker state over Unix socket and notifies the UI via IPC events. ## Tech Tauri v2 hosts the Rust backend and React 19 frontend. Bollard communicates with the Docker daemon asynchronously. Frecency ranking combines recency and frequency scores to sort containers by likelihood of access. Zustand manages frontend state. Framer Motion handles panel entrance and exit animations. ## Rice Docs — macOS Developer Environment **Subtitle:** Living documentation for a keyboard-driven macOS setup — X-Type layout, OmniWM, SketchyBar, Karabiner 4-layer architecture **Category:** infrastructure **Tech:** Karabiner-Elements, OmniWM, SketchyBar, Ghostty, tmux, Neovim, Bash, Lua, Swift, JSON **URL:** https://yourportfolio.example/projects/rice-docs Complete, living documentation for a keyboard-first macOS development environment — X-Type custom layout at 213 WPM, OmniWM tiling WM with 5 workspaces, SketchyBar scripted status bar, Karabiner-Elements 4-layer key architecture, and an event-driven Neovim mode indicator. ## Overview A 5,000-line Markdown knowledge base documenting an entire keyboard-driven macOS developer environment — written to be LLM-editable with explicit instructions like "tell Claude to apply it." 
The X-Type 4-layer Karabiner architecture maps every symbol and WM action to home-row reach across 43 manipulators. SketchyBar subscribes to OmniWM IPC events, Neovim mode changes, and a CoreAudio CATapDescription HAL tap for an FFT visualiser item. ## Tech Karabiner-Elements intercepts physical key codes at the IOKit layer with named-variable symbol modifier layers. OmniWM provides a Niri-style scrolling-column layout with five named workspaces and a built-in quake terminal. SketchyBar renders workspace indicators, focused app name, Neovim mode, and system stats via Bash plugin scripts subscribed to system events. A Swift helper implements the CATapDescription CoreAudio tap for the cava FFT visualiser. ## X-Type Android Keyboard **Subtitle:** First native Android app — custom IME implementing the X-Type layout, built at age 16 **Category:** mobile **Tech:** Android, Java, XML, Android SDK, InputMethodService **URL:** https://yourportfolio.example/projects/x-type-android The first native Android app — a custom Input Method Editor that brought the X-Type keyboard layout to a personal Android phone, built at 16 by learning Android Java development from tutorials. ## Overview The X-Type layout existed only as a design until this app made it usable on a phone. Built by learning Android's InputMethodService API from tutorials and translating the X-Type key position spec into Android XML keyboard layout files. The app registers as a system-level input method, allowing X-Type typing across every app on the device — the first step in a multi-OS rollout that eventually reached 213 WPM. ## Tech Java with Android's InputMethodService API implements the IME lifecycle and key-event routing to the active text field. XML keyboard layout files define the X-Type row and key geometry. The app installs via sideloaded APK and registers with Android's InputMethodManager as a selectable keyboard. ## Hotel Elegent **Subtitle:** XState-driven PMS for real property deployment **Category:** clientwork **Tech:** Next.js, TypeScript, Bun, XState, Drizzle, PostgreSQL, Zustand, TanStack Query, Soketi, Razorpay, Stripe, Tailwind CSS **URL:** https://yourportfolio.example/projects/elegent-resort White-label hotel management system with an XState v5 finite-state booking engine, restaurant POS, guest CRM, digital check-in, and revenue analytics — production-deployed to a real hotel and packaged for ThemeForest. ## Overview Hotel Elegent is a full-stack hotel property management system built for a real deployment — a family hotel in Sawai Madhopur — and simultaneously packaged as a white-label ThemeForest product. It covers the full hotel operations domain: a multi-step booking engine, front desk calendar, restaurant POS, housekeeping and maintenance tracking, guest CRM with loyalty points, digital self-service check-in, concierge and guest request management, revenue analytics, and a dynamic pricing rules engine. The entire system is driven by a 46-schema PostgreSQL database, configured through an 8-step installation wizard, and deployable to a VPS with a single shell script. Commercial hotel PMS products charge ₹5,000–20,000 per month with no self-hosting option. This replaces that recurring cost with a one-time deployment on owned infrastructure, while being generic enough — white-label currency, swappable payment gateways, dual image hosting — to sell globally on ThemeForest. The same codebase serves the family hotel in production and the ThemeForest submission ZIP.
The project ran for 16 months across 23 named sprints and 195 commits, reaching 111/111 Playwright E2E tests before the ThemeForest package was assembled. ## The Challenge A 7-step booking flow involving real-time availability, guest authentication, payment, and confirmation has at least 15 distinct failure modes — payment retry, mid-flow abandonment, availability changes between steps, step navigation backward and forward. Managing this with nested `useState` and conditional renders would scatter the failure logic invisibly across components. The state had to be made explicit and testable, which meant choosing a tool designed for that job rather than improvising with React primitives. The white-label requirement added a second layer of constraint: every piece of hardcoded configuration — currency symbols, image hosting provider, payment gateway keys, branding — had to be extractable to settings without touching source. Replacing 100+ hardcoded `₹` and `INR` references with a dynamic currency formatter mid-project (Sprint 14) is the kind of work that distinguishes a "hotel template for India" from a product a hotel in Dubai can actually use. ## Architecture The stack is Next.js 15 (App Router, React 19, Turbopack) running on Bun with strict TypeScript throughout. State management follows a strict three-system rule enforced in CLAUDE.md: XState v5 owns the booking flow FSM (`bookingMachine.ts`, 280 lines — 7 states, explicit guards, error/retry, RESET and BOOK_MORE transitions); Zustand v5 owns client-side form state collected during the flow; TanStack Query v5 owns all server data fetching. The three systems do not cross boundaries — no server data in Zustand, no form state in TanStack Query, no flow logic in either. The 46 Drizzle schema files cover the complete hotel domain: rooms, bookings, folio charges, payments, housekeeping, maintenance, restaurant tables, orders, menu, pricing rules, coupons, occupancy snapshots, loyalty points, reviews, guest profiles, notes, messages, notifications, WhatsApp, audit logs, digital check-in tokens, concierge, channel manager, shift notes, staff profiles, site settings, installation config, invoices, and 2FA. Primary key convention is deliberate: UUIDs for content tables (stable, URL-safe, non-leaking) and serial integers for transactional rows (bookings, payments) where monotonic ordering benefits index efficiency and audit trails. Real-time room availability, concierge requests, and restaurant order updates flow through Soketi (self-hosted) or Pusher (production) on the Pusher protocol. Payments support Razorpay (India), Stripe (international), and PayPal, all read from environment variables — swappable without code changes. ## Key Decisions **XState v5 for the booking FSM.** The 7-step flow has too many failure modes to manage safely with imperative state. An FSM makes illegal transitions impossible by construction — jumping from `idle` to `payment` without passing through `roomSelection` and `guestDetails` simply cannot happen. Each state is independently addressable, which is what made 111 E2E tests achievable without complex setup scaffolding. **Three-state-system rule (XState / Zustand / TanStack Query — no mixing).** Each library owns a specific concern and the CLAUDE.md prohibits crossing those boundaries. This prevents the common anti-pattern of storing server data in Zustand or putting form state in TanStack Query — both of which produce subtle staleness and hydration bugs that are hard to bisect. 
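A heavily collapsed sketch of the booking FSM described above (event names beyond those cited are hypothetical; guards, context, and retry logic from the real 280-line machine are omitted):

```ts
import { createMachine } from 'xstate'

// Sketch only: fewer states than the production machine, no guards
// or actions. Illegal jumps (idle straight to payment) cannot occur.
const bookingMachine = createMachine({
  id: 'booking',
  initial: 'idle',
  states: {
    idle: { on: { START: 'roomSelection' } },
    roomSelection: { on: { NEXT: 'guestDetails', RESET: 'idle' } },
    guestDetails: { on: { NEXT: 'payment', BACK: 'roomSelection' } },
    payment: { on: { SUCCESS: 'confirmed', FAILURE: 'paymentFailed' } },
    paymentFailed: { on: { RETRY: 'payment', RESET: 'idle' } },
    confirmed: { on: { BOOK_MORE: 'roomSelection' } },
  },
})
```

Because every state is named and every transition explicit, each step of the flow can be driven to directly in a test, which is what the E2E suite exploits.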
**UUID vs serial PK discipline.** Content tables use UUIDs for stable, non-sequential IDs safe for URLs and external references. Transactional tables use serial integers for index efficiency and natural audit ordering. The split is encoded in the Drizzle schema from the first sprint, so it never had to be retrofitted. **White-label currency as a product decision.** Replacing 100+ hardcoded `₹` / `INR` strings with a dynamic currency formatter is what separates a regional template from a globally sellable ThemeForest product. A buyer in Dubai or London should not have to grep-replace currency symbols — it comes from the settings panel. **`setup-vps.sh` as a first-class deliverable.** ThemeForest buyers abandon products that require manual Nginx config and SSL setup. The one-command provisioner handles Docker install, Certbot SSL, Nginx HTTPS reverse proxy (443→3000), WSS proxy for Soketi, backup cron, and renewal cron — everything a non-technical hotel owner needs to go from a blank VPS to a running system. ## Results Hotel Elegent is production-deployed to the family hotel in Sawai Madhopur, eliminating a recurring commercial PMS cost of ₹5,000–20,000 per month. The ThemeForest submission package — 134,000 lines of TypeScript, 195 commits, 23 sprints, 46 schema files, 111/111 Playwright E2E tests — is complete. Three payment gateways, dual image hosting (Cloudinary or self-hosted Openinary), a white-label currency system, and a one-command VPS provisioner make it deployable by a non-technical buyer without a DevOps hire. ## SingleClickBlog — Faust.js + WordPress **Subtitle:** Headless WordPress blog with Faust.js, WPGraphQL, Apollo, Tiptap editor, and self-hosted Coolify deployment **Category:** fullstack **Tech:** Next.js 14, TypeScript, Faust.js, WordPress, WPGraphQL, Apollo Client, Redux Toolkit, Tailwind CSS 3.3, Tiptap 2, graphql-codegen, Vercel, Coolify **URL:** https://yourportfolio.example/projects/singleclickblog-faust **Live:** https://karan.willmakeitsoon.com A full-featured headless WordPress publishing platform using Faust.js and WPGraphQL, with Apollo Client, Redux Toolkit state management, a Tiptap rich-text editor, and automated GraphQL type generation — deployed on self-hosted Coolify. ## Overview A production headless WordPress setup combining Faust.js for the Next.js data-fetching layer with WPGraphQL on the WordPress side. graphql-codegen generates TypeScript types from the WPGraphQL schema at build time, eliminating a whole category of type errors. The Tiptap editor provides the authoring interface in the Next.js admin surface, posting back to WordPress via the REST API. ## Tech Next.js 14 with Faust.js handles routing and authentication handoff to WordPress. Apollo Client manages GraphQL data fetching and caching. Redux Toolkit stores client session state. graphql-codegen runs in CI to keep generated types in sync with the WordPress schema. Self-hosted WordPress runs on Coolify; the Next.js frontend deploys to Vercel. ## Earning Point — App Landing Page **Subtitle:** Play Store compliance HTML landing page set for a rewards app — privacy policy, terms, and support pages **Category:** clientwork **Tech:** HTML5, Tailwind CSS CDN, Bootstrap 4 **URL:** https://yourportfolio.example/projects/earning-point-homepage A set of four static HTML pages satisfying Google Play Store policy requirements for the Earning Point rewards app — privacy policy, terms of service, data deletion, and a support contact page. 
## Overview Google Play Store requires apps to link to hosted privacy policy and terms of service pages before approval. This set of four static HTML pages fulfils those requirements for the Earning Point rewards app — clean, mobile-responsive, and easy to update without a build system. ## Tech Pure HTML5 with Tailwind CSS via CDN and Bootstrap 4 for responsive layout utilities. No build step, no JavaScript framework. The pages load instantly and satisfy all Play Store link verification requirements. ## TinkerUI — Local Generative UI Playground **Subtitle:** Offline generative-UI tool with Ollama streaming, sandboxed iframe preview, and a QLoRA fine-tuning pipeline **Category:** aitools **Tech:** React 19, TypeScript, Vite 7, Tailwind CSS v4, Tauri v2, Rust, Python, Ollama, QLoRA, PEFT, TRL, Qwen **URL:** https://yourportfolio.example/projects/tinker-ui A split-pane chat-plus-preview app that streams Tailwind HTML from a local Ollama LLM into a sandboxed iframe — built in one day with React 19, Vite 7, and Tauri v2, including a complete QLoRA fine-tuning pipeline for Qwen 2.5 Coder 1.5B. ## Overview TinkerUI lets you describe a UI component in natural language and see it rendered in a live sandboxed iframe as tokens stream from a local Ollama model — no cloud API, no rate limits. A streaming-aware HTML extractor handles four distinct partial-output shapes so the preview updates token-by-token without broken renders. The companion Python pipeline fine-tunes Qwen 2.5 Coder 1.5B with QLoRA on a 340-example fuzzy-deduplicated dataset and exports to GGUF for Ollama. ## Tech Vite 7 with React 19 and Tailwind CSS v4 builds the frontend. Ollama's streaming NDJSON API delivers token-by-token generation. The srcdoc iframe with Tailwind CDN isolates untrusted HTML from the app DOM. Tauri v2 wraps the app as a native desktop binary. The Python training stack uses Unsloth, PEFT, and TRL with MPS-compatible float32 training for Apple Silicon. ## Xecute — Productivity Command Center **Subtitle:** 34-panel macOS productivity HUD with Kanban, Pomodoro, SM-2 flashcards, Fuse.js search, and 45k LOC **Category:** opensource **Tech:** Tauri v2, Svelte 5, TypeScript, Rust, Tailwind CSS v4, Fuse.js, Shiki, marked, objc2 **URL:** https://yourportfolio.example/projects/xecute A single PgDn keypress summons a transparent always-on-top productivity HUD with 34 tool panels — Kanban, Pomodoro, Notes with wikilinks, Time Tracker, Finance, Flashcards with SM-2, Mind Map, and more — built with Tauri v2, Svelte 5, and objc2 NSApplication activation. ## Overview Xecute consolidates a full productivity stack — Notion, Toggl, Anki, Bear, clipboard manager, bookmark manager — into one keyboard-triggered overlay. Cross-panel integrations wire Pomodoro sessions to Time Tracker, Kanban cards to Standup auto-population, and Flashcards to a shared SM-2 spaced-repetition engine with Vocabulary. A 1,026-line SPEC.md written before any code prevented the reactivity bugs discovered on the predecessor CheaXheet from recurring across all 34 panels. ## Tech Tauri v2 with unsafe objc2/NSApplication.activateIgnoringOtherApps fixes the keyboard-focus problem under ActivationPolicy::Accessory. Svelte 5 runes with per-panel state slices prevent monolithic re-render bottlenecks across 34 domains. Fuse.js runtime-indexes all panel state at Cmd+K invocation for cross-panel search. SM-2 spaced repetition is implemented once and shared between Flashcards and Vocabulary panels. 
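As a reference for that shared engine, a minimal sketch of the standard SM-2 update rule (type and field names are invented; Xecute's actual implementation differs):

```ts
interface CardState {
  repetition: number // consecutive successful reviews
  intervalDays: number // days until the next review
  easeFactor: number // SM-2 ease factor, floored at 1.3
}

// Standard SM-2: quality is the 0-5 recall grade for one review.
function sm2Review(card: CardState, quality: number): CardState {
  if (quality < 3) {
    // Failed recall: repetitions restart, ease factor is kept.
    return { ...card, repetition: 0, intervalDays: 1 }
  }
  const easeFactor = Math.max(
    1.3,
    card.easeFactor + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)),
  )
  const repetition = card.repetition + 1
  const intervalDays =
    repetition === 1 ? 1 : repetition === 2 ? 6 : Math.round(card.intervalDays * easeFactor)
  return { repetition, intervalDays, easeFactor }
}
```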
## Prachyam Dev Mesh **Subtitle:** 3-node Tailscale mesh beats RAM limits **Category:** infrastructure **Tech:** Tailscale, Caddy, dnsmasq, Docker Compose, TypeScript, MCP SDK, Cloudflare DNS **URL:** https://yourportfolio.example/projects/prachyam-dev-mesh A 3-machine Tailscale mesh with dnsmasq and Caddy that made a 9-platform OTT monorepo buildable on a 16 GB M1 iMac by offloading all Docker services to companion machines, plus 10 custom MCP servers that cut AI assistant context load by 69%. ## Overview Prachyam Studios provided a 16 GB M1 iMac as the sole development machine for the Sangam OTT monorepo — a codebase targeting web, iOS, Android, Apple TV, Android TV, Roku, Tizen, and webOS simultaneously. Running all Docker services (databases, search, AI inference, real-time, monitoring, payments) alongside active build processes and simulators was not viable on 16 GB. The machine would saturate memory and become unresponsive. The solution was a 3-node Tailscale mesh: active development stayed on the iMac while all stateful Docker services ran on two companion machines, reachable over a WireGuard overlay network. dnsmasq provided wildcard DNS so every service got a clean `*.local` domain; Caddy terminated TLS with a wildcard certificate so browser sessions worked identically to production. No new hardware was purchased and no cloud spend was incurred. The architecture later evolved into a personal `~/infra` repository with 97 named Caddy routes and a full production-parity toolkit for solo development — a direct successor to the Prachyam mesh. ## The Challenge The hardware budget was fixed. An upgrade to a Mac Studio or Mac Pro would have cost ₹1,50,000–₹4,00,000 and required a budget approval process that wasn't on the table. Cloud dev environments were rejected due to GPU locality requirements for AI inference workloads and prohibitive cost on a volunteer salary. The constraint was real and required a topological solution, not a hardware one. ## Architecture Three machines joined a private WireGuard overlay via Tailscale. Tailscale SSH handled cross-node authentication without managing key pairs per machine. dnsmasq on each node resolved `*.local` (Prachyam era) / `*.dev.dharmic.cloud` (personal era) wildcard domains to the appropriate host. Caddy handled TLS termination via Cloudflare DNS-01 challenge, issuing a single wildcard certificate that covered all 97+ service subdomains. Docker Compose services — PostgreSQL, Dragonfly, MinIO, Typesense, Qdrant, NATS, Temporal, Authentik, Flipt, Lago, OpenTelemetry, Grafana — ran on the non-development machines. The development machine ran only active `bun dev` and compile processes. Ten custom MCP servers (TypeScript, `@modelcontextprotocol/sdk`) exposed project internals — workflow docs, API testing, semantic code search, database schema, feature flags, infra health — as on-demand tools invoked by the AI coding assistant over Tailscale SSH. ## Key Decisions **Tailscale over self-managed WireGuard.** Tailscale's coordination server handles peer discovery automatically and Tailscale SSH eliminates separate key management — critical for a 3-machine solo-operated mesh that also bridged Pune and Varanasi offices without firewall changes. **dnsmasq wildcard address record instead of per-host entries.** A single `address=/.dev.dharmic.cloud/127.0.0.1` line covers all 97+ subdomains; adding a new service requires only a Caddy route block, making the MIGRATE.md runbook fully automatable. 
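The ten MCP servers mentioned in the architecture above follow a small, repeatable shape. A hypothetical single-tool sketch using `@modelcontextprotocol/sdk` (the tool name and `/healthz` endpoint are invented; only the wildcard domain comes from the setup described here):

```ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { z } from 'zod'

// The assistant calls this tool on demand instead of the data
// living permanently in CLAUDE.md.
const server = new McpServer({ name: 'infra-health', version: '0.1.0' })

server.tool(
  'service_status',
  { service: z.string().describe('Caddy route name, e.g. grafana') },
  async ({ service }) => {
    const res = await fetch(`https://${service}.dev.dharmic.cloud/healthz`)
    return { content: [{ type: 'text', text: `${service}: HTTP ${res.status}` }] }
  },
)

await server.connect(new StdioServerTransport())
```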
**Caddy over nginx for TLS.** Caddy's automatic ACME support with the Cloudflare DNS plugin handles wildcard cert renewal without cron jobs or per-service cert management — essential for a rapidly expanding service catalogue. **MCP servers over a fat CLAUDE.md.** Deferring context loading until the assistant requests it reduced CLAUDE.md from 105k to 32k characters, dropped context-limit errors in long sessions to near-zero, and gave the assistant on-demand access to 29 workflow documents and 9 other context domains. ## Results The monorepo build, simulator runs, and full integration test suite ran on the constrained machine with zero hardware cost and zero cloud spend. The 10 custom MCP servers reduced AI assistant context load by 69%, which lowered per-session token cost and eliminated context-window errors during long development sessions. The pattern was later formalised into a personal `~/infra` repository with 97 HTTPS routes and a machine-readable MIGRATE.md runbook that reduces new-project infra onboarding from two hours to approximately ten minutes. ## OTT Component Library **Subtitle:** 550+ components across web, mobile, TV, and admin **Category:** fullstack **Tech:** Next.js, React Native, Expo, Tailwind CSS, TypeScript, Drizzle ORM, PostgreSQL **URL:** https://yourportfolio.example/projects/ott-components A 4-platform atomic design system for a production OTT platform — 273 web components, 158 mobile components, 128 admin components, and a 27-module Drizzle ORM database schema — built solo in 5 days across Next.js, React Native, and TV surfaces. ## Overview The prachyam-sangam OTT monorepo targets five surfaces — web, iOS, Android, Apple TV/Android TV/Roku/Tizen/webOS, and a desktop admin panel — from a single codebase. Without a dedicated component library, each surface would duplicate UI logic and drift from the design system, repeating the same pattern that had broken the previous 18-developer Prachyam codebase. Three off-the-shelf OTT templates from ThemeForest were evaluated and rejected as incomplete or not extensible. A ground-up library was the only path to full control. Built across 145 commits in 5 days (2025-11-17 to 2025-11-22), the library covers: 273 web components + 53 pages + 23 templates (Next.js 16 + Tailwind CSS v4), 158 mobile components (Expo SDK 54 + React Native 0.81.5 plain StyleSheet), 128 admin organisms (separate Next.js app on port 3001), and a named `@ott-platform/db-schema` package with 27 Drizzle ORM domain modules. Every component renders with Faker.js-generated mock data, making the showcase page a living visual test suite without Storybook. ## The Challenge A single design system had to drive two fundamentally different rendering engines — Tailwind CSS v4 for Next.js and React Native StyleSheets — while sharing the same design tokens. TV surfaces required keyboard/D-pad focus management that web frameworks provide no native solution for, and third-party TV nav SDKs carry enterprise licensing costs. The admin bundle had to stay completely separate from the viewer-facing bundle to avoid inflating load size with heavy admin dependencies. ## Architecture Web UI uses Next.js 16 App Router with Tailwind CSS v4 CSS custom properties for design tokens and Biome 2.2 for linting. Mobile UI uses Expo SDK 54 with React Native's plain `StyleSheet` API against a shared `theme.ts` module exporting the same colour values as JS constants — avoiding NativeWind build complexity on TV targets while keeping visual parity.
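A sketch of that token-sharing approach (the module shape and secondary values are hypothetical; the terracotta primary is the palette value cited in the next section):

```ts
// theme.ts (sketch): the same values the web app declares as CSS custom
// properties, re-exported as plain JS constants for React Native StyleSheet.
import { StyleSheet } from 'react-native'

export const theme = {
  colors: {
    primary: '#c96442', // must stay in sync with the web token file
    surface: '#ffffff', // hypothetical secondary value
  },
} as const

// Plain StyleSheet usage keeps TV and desktop targets free of
// NativeWind build tooling while staying on the shared palette.
export const styles = StyleSheet.create({
  badge: { backgroundColor: theme.colors.primary },
})
```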
Admin UI is a standalone second Next.js 16 app on port 3001, keeping the admin bundle completely separate from the viewer-facing app. The database schema (`@ott-platform/db-schema`) is a versioned package with granular exports by domain so any service can import only the slice it needs. Components follow a strict atomic hierarchy: Atoms → Molecules → Organisms → Templates → Pages, enforced by directory structure with no cross-contamination. ## Key Decisions **Separate StyleSheet vs Tailwind CSS.** The Tailwind/StyleSheet split avoids NativeWind build issues on TV and desktop targets while keeping the warm terracotta palette (`primary: #c96442`) as a single source of truth across two source files — one CSS variables file and one `colors.ts` constant. **TV D-pad navigation without a library.** `TVNavigationGrid` and `TVKeyboard` handle focus through `window.addEventListener("keydown", ...)` with grid-math for arrow keys. No spatial navigation library was pulled in, keeping the bundle small and removing a dependency that would need Tizen/webOS-specific shims. **Admin as a standalone Next.js app.** Rather than stuffing admin routes into web-ui's route groups, a separate `package.json` project on port 3001 ensures the admin bundle — with heavy charting, data tables, and Docker API calls — is never shipped to end-users. **Drizzle schema as a named package with granular exports.** `import { content } from '@ott-platform/db-schema/content'` lets any service import exactly its domain slice, preventing schema drift and coupling between API server, admin, and analytics workers. ## Results One design system drives all five OTT surfaces from two token source files. The web UI is 100% complete at 273 components and 53 full page implementations. The TV focus management works on any keydown-capable TV browser with zero external dependencies. The 27-module database schema covers the full platform — auth, content, live streaming, social, billing, analytics, DRM, ads, creator monetisation, i18n, audit, referrals, and events — deployable against PostgreSQL 16 with `drizzle-kit migrate`. The library replaced three rejected commercial templates at a cost of only Karanveer's time. ## Stack Breadth Sandbox — 14+ Frameworks **Subtitle:** Systematic exploration of 14+ modern frameworks and runtimes across web, mobile, desktop, and server **Category:** opensource **Tech:** Astro, Remix, SolidJS, SvelteKit, TanStack Start, Vue, Nuxt, Expo, Flutter, Tauri, Electron, Elysia, Hono, Bun, Deno, NestJS, Turborepo, Nx, Better Auth, Drizzle ORM, Electric SQL **URL:** https://yourportfolio.example/projects/stack-breadth-sandbox A multi-year sandbox of real hello-world-to-functional projects across 14+ frameworks — Astro, Remix, SolidJS, SvelteKit, TanStack Start, Vue, Nuxt, Expo, Flutter, Tauri, Electron, Elysia, Hono, Bun, Deno, NestJS — to build genuine breadth rather than surface-level familiarity. ## Overview A deliberate multi-year investment in framework breadth — each framework taken past hello-world to something functional. The goal was to understand not just API surface but architectural philosophy: how each framework thinks about routing, reactivity, server/client boundary, and developer experience. This sandbox is the foundation behind being able to choose the right tool for each production project rather than defaulting to the familiar one. 
## Tech Projects span the full modern web and app stack: Astro and Remix on the web; SolidJS, SvelteKit, Vue/Nuxt for reactivity-first SPAs; TanStack Start and Hono for edge-first servers; Expo and Flutter for mobile; Tauri and Electron for desktop; Elysia and Bun for high-performance server runtimes; NestJS for enterprise patterns; Turborepo and Nx for monorepo tooling. ## Claude Usage — macOS Menu Bar Tracker **Subtitle:** Native Swift macOS app with WidgetKit, AppIntents, and reverse-engineered Claude internal API for usage tracking **Category:** opensource **Tech:** Swift, SwiftUI, WidgetKit, AppIntents, AppKit, UserNotifications, Combine **URL:** https://yourportfolio.example/projects/claude-usage A native macOS menu bar app that tracks Claude API token consumption — using reverse-engineered internal Claude endpoints, WidgetKit for desktop widgets, AppIntents for Siri and Shortcuts integration, and Combine for reactive data flow. ## Overview A native macOS menu bar utility that monitors Claude API token usage across sessions and models. The app reverse-engineers Claude's internal usage API (accessible via the authenticated session cookie) to fetch real consumption data. WidgetKit exposes a Today widget showing current period usage. AppIntents registers Siri and Shortcuts actions for querying usage on demand. ## Tech Swift and SwiftUI build the menu bar popover and settings UI. AppKit handles the NSStatusItem menu bar icon. WidgetKit provides the lock screen and Notification Center widget. AppIntents registers usage-query intents with Siri and the Shortcuts app. Combine drives reactive updates when new usage data is fetched. UserNotifications fires alerts when approaching usage thresholds. ## DocSee Systray — macOS Docker System Tray **Subtitle:** Tauri 2 system-tray Docker manager with force-directed canvas graph, NSVisualEffectView vibrancy, and 19 commits **Category:** infrastructure **Tech:** Rust, Tauri 2.0, React 19, TypeScript, Tailwind CSS v4, shadcn/ui, Zustand, Bollard, Tokio, NSVisualEffectView **URL:** https://yourportfolio.example/projects/docsee-systray A macOS system-tray app for Docker management built with Tauri 2, Rust, and React 19 — featuring a force-directed canvas graph of container relationships, NSVisualEffectView blur via ObjC FFI, Tailwind CSS 4, and shadcn/ui. ## Overview A macOS system-tray application that surfaces Docker container state in a transparent floating panel. The panel renders a live force-directed canvas graph showing container interconnections — networks, volumes, and compose project membership as edges. True macOS vibrancy is achieved by calling NSVisualEffectView through Rust's ObjC FFI layer, matching the visual language of native macOS panels. ## Tech Tauri 2.0 with Rust handles the native macOS integration. Bollard communicates with the Docker daemon. NSVisualEffectView is accessed via cocoa and objc Rust crates for authentic background blur. React 19 with Tailwind CSS v4 and shadcn/ui builds the frontend. The force-directed graph runs on an HTML5 canvas with custom physics. ## Next.js Blog — Sanity CMS **Subtitle:** Personal blog with Sanity CMS, SWR infinite pagination, and Bootstrap 4 styling **Category:** fullstack **Tech:** Next.js 9.4.4, React 16, Sanity, SWR, Bootstrap 4, SCSS **URL:** https://yourportfolio.example/projects/nextjsblog An early personal blog built with Next.js 9 and React 16, using Sanity as the headless CMS, SWR for client-side data fetching with infinite pagination, and Bootstrap 4 for styling. 
## Overview An early exploration of the Next.js + headless CMS pattern, built before Jamstack tooling had fully matured. Sanity provides the content editing interface; Next.js fetches and renders posts server-side. SWR handles infinite-scroll pagination on the listing page without a full page reload. This project established the CMS-driven blog architecture refined in later work. ## Tech Next.js 9.4.4 with getServerSideProps fetches Sanity content at request time. SWR handles client-side pagination with a stale-while-revalidate cache strategy. Bootstrap 4 and SCSS provide the responsive layout. Three years of active use produced 75 commits before the project was superseded by headless WordPress work. ## Astro Coming-Soon — Prachyam Relaunch Page **Subtitle:** Animated WebGL launch-announcement page with GLSL shaders, Supabase waitlist, and SMTP email confirmation **Category:** fullstack **Tech:** Astro, React, TypeScript, Tailwind CSS, Supabase, OGL, GLSL, Nodemailer, Netlify **URL:** https://yourportfolio.example/projects/astro-coming-soon **Live:** https://prachyam.willmakeitsoon.com/ A high-performance coming-soon page for the Prachyam platform relaunch — built with Astro SSR, OGL WebGL/GLSL shaders, Supabase Realtime waitlist capture, and self-hosted Nodemailer SMTP confirmation emails. ## Overview Designed and shipped a visually striking launch announcement page in three days to announce the Prachyam platform relaunch. OGL-powered GLSL shaders produce the animated hero background with near-zero JavaScript overhead. Supabase Realtime captures waitlist signups and Nodemailer delivers confirmation emails via a self-hosted SMTP relay. ## Tech Astro SSR provides the static-first rendering shell with React islands for interactive components. OGL (a minimal WebGL library) drives the GLSL shader animation without the Three.js bundle cost. Supabase handles the real-time waitlist database. Netlify hosts the final deployment. ## Kanishka Creations — Fashion E-commerce **Subtitle:** Full-stack React Native fashion app with GPU canvas product rendering, AI background removal, and Turborepo monorepo **Category:** mobile **Tech:** React Native 0.79+, React Native Skia, Reanimated 3, Express, Next.js 15, Drizzle ORM, PostgreSQL, Redis, MinIO, Meilisearch, rembg, Turborepo, Razorpay, Shiprocket **URL:** https://yourportfolio.example/projects/kanishka-creations A fashion e-commerce platform built in a Turborepo monorepo — React Native with React Native Skia GPU canvas, a Next.js admin dashboard, an Express/Drizzle API, Meilisearch product search, rembg AI background removal, and Razorpay + Shiprocket fulfilment. ## Overview A vertically integrated fashion commerce platform — mobile storefront, web admin, and API — all in a single Turborepo monorepo. React Native Skia renders GPU-accelerated product cards and try-on overlays. The rembg AI service removes image backgrounds from product photos automatically. Shiprocket handles order dispatch and tracking. ## Tech React Native 0.79 with React Native Skia and Reanimated 3 powers the mobile client. The backend is Express with Drizzle ORM on PostgreSQL, Redis for caching, and MinIO for media storage. Meilisearch provides typo-tolerant product search. Turborepo manages build orchestration across the monorepo packages. 
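As a sketch of how the storefront consumes that typo-tolerant search (host, index name, and query are invented for illustration):

```ts
import { MeiliSearch } from 'meilisearch'

const client = new MeiliSearch({ host: 'http://127.0.0.1:7700' })
const products = client.index('products')

// Meilisearch tolerates typos by default based on word length,
// so a query like "kutra" still matches documents containing "kurta".
const results = await products.search('kutra', {
  limit: 12,
  attributesToHighlight: ['name'],
})
console.log(results.hits)
```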
## Personal Blog — 2014 **Subtitle:** CMS-powered personal blog built in Webflow at age 14 with live chat and a unified logo system **Category:** design **Tech:** Webflow, Webflow CMS, tawk.to **URL:** https://yourportfolio.example/projects/personal-blog-2014 A CMS-managed personal blog built in Webflow at age 14 — using Webflow's newly launched CMS feature, tawk.to live chat, and a custom UI design from scratch — the product that spawned the Trianglify plugin and the unified logo system still in use a decade later. ## Overview The first product shipped with real users. Designed entirely from scratch in Webflow's visual editor — not a template — using the CMS feature within weeks of its public launch. tawk.to live chat embedded via Webflow's custom code block created the first bidirectional user-author relationship surface. The need for an animated hero background couldn't be solved inside Webflow, so it was deferred and became the Trianglify jQuery plugin the following year. ## Tech Webflow CMS collections manage blog post content with dynamic page templates. tawk.to provides the live chat widget via a JavaScript embed injected through Webflow's custom code feature. The unified logo system — a master mark with a per-product derivation rule inspired by Adobe's product identity approach — was designed alongside this project and remained in active use for over a decade. ## Satta Matka — Live Results App **Subtitle:** Expo SDK 55 React Native app with Reanimated 4, self-hosted Supabase, and a custom OdometerText animation **Category:** mobile **Tech:** Expo SDK 55, React Native 0.83.2, TypeScript, NativeWind, Reanimated 4, Zustand, Supabase, Victory Native **URL:** https://yourportfolio.example/projects/satta-matka A live-results React Native app built on Expo SDK 55 and React Native 0.83.2, with Reanimated 4 animations including a custom OdometerText number roller, self-hosted Supabase on Coolify for real-time data, and Victory Native charts. ## Overview A live number-results app for the Satta Matka lottery format, with real-time data pushed from a self-hosted Supabase instance running on Coolify. The custom OdometerText component uses Reanimated 4 to animate individual digit columns like a physical odometer as new results arrive. Victory Native renders historical result charts. ## Tech Expo SDK 55 with React Native 0.83.2 provides the mobile runtime. Reanimated 4 powers the OdometerText digit-roll animation and screen transitions. Supabase Realtime delivers result updates via WebSocket subscription. NativeWind handles Tailwind utility styling. Zustand manages application and subscription state. ## Gamezop Integration — Prachyam HTML5 Games **Subtitle:** Revenue-share HTML5 game hub embedded into the Prachyam OTT platform via Flutter WebView and React **Category:** fullstack **Tech:** Flutter, React, WebView, HTML5, Dart, TypeScript **URL:** https://yourportfolio.example/projects/gamezop-integration **Live:** https://prachyam.com Integrated the Gamezop HTML5 game platform into Prachyam's mobile and web surfaces using Flutter WebView and a React embed, enabling a revenue-share casual-gaming layer for subscribers with zero custom game development. ## Overview Extended the Prachyam OTT platform with a casual gaming hub by integrating the Gamezop revenue-share platform. The Flutter mobile app hosts the game catalogue in a WebView with seamless authentication handoff, while the React web client uses an iframe-based embed. 
Revenue from game session ads flows back to Prachyam under a standard Gamezop revenue-share agreement. ## Tech Flutter WebView wraps the Gamezop HTML5 game catalogue for iOS and Android. A lightweight React component handles the web embed. Authentication context is passed at iframe initialisation so subscribers land directly into the game lobby without re-login. ## Rishi Talks Mobile **Subtitle:** Astrology app shipped to both stores in one day **Category:** mobile **Tech:** React Native, Expo, TypeScript, EAS Build, react-native-webview, Gamezop **URL:** https://yourportfolio.example/projects/rishi-talks **Live:** https://rishi-talks.com A production-ready React Native / Expo WebView wrapper for the Rishi Talks astrology WordPress site, with typed offline detection, Android back-button handling, and a Gamezop revenue integration — built and shipped to both platforms in a single session. ## Overview Prachyam Studios needed App Store and Play Store presence for their Rishi Talks astrology brand without rebuilding the existing WordPress site as a native app. A thin native shell delivers better mobile UX — no browser chrome, native splash screen, proper back-navigation — while all content stays managed through WordPress. The entire app was designed, built, debugged, and shipped within a single day: three commits, both Android APK and iOS builds confirmed working on 2025-10-27. The app loads `https://rishi-talks.com` in a full-screen `react-native-webview` with no browser chrome. Beyond the basic wrapper, it implements a typed cross-platform offline-state machine, automatic retry, splash screen held to first page load, link confinement, and a custom user-agent for analytics segmentation. A Gamezop HTML5 game revenue-share integration — proposed by Karanveer in a Prachyam revenue meeting — was added to both the WordPress site and the mobile app, creating a new ad revenue channel with zero game-development cost. ## The Challenge The constraint was speed: the website was already live and Prachyam needed app store presence quickly. The technical challenge was building a wrapper that felt native rather than a browser tab — handling offline states gracefully, managing Android's hardware back button, preventing the white-flash between splash and first page load, and keeping all navigation within the app rather than leaking to the system browser. ## Architecture The app is a single Expo SDK 54 screen (`app/index.tsx`) using Expo Router 6 for file-based routing. `react-native-webview` 13.15.0 handles rendering with `React Native 0.81.5` and `React 19.1`. The New Architecture is enabled (`newArchEnabled: true`) along with the React Compiler experiment. EAS Build provides three profiles — development (dev-client), preview (internal APK), and production (auto-increment for store submission). No backend or database exists in-app; `cacheEnabled: true` on the WebView uses the system HTTP cache. The offline-detection layer classifies errors using a matrix of iOS numeric codes (-2, -1009, -1003) and Android description string-matching, isolating genuine connectivity failures from HTTP errors or in-page JS errors. ## Key Decisions **Holding the splash screen to `onLoadEnd`.** `SplashScreen.preventAutoHideAsync()` is called immediately and `SplashScreen.hideAsync()` fires only when the WebView's first page load completes, eliminating the flash of unstyled content that a timer-based approach cannot avoid. 
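A minimal sketch of that splash-hold pattern (simplified; the real `app/index.tsx` also wires offline detection, link confinement, and Android back-button handling):

```tsx
import { useCallback } from 'react'
import * as SplashScreen from 'expo-splash-screen'
import { WebView } from 'react-native-webview'

// Keep the native splash visible until the WebView's first page
// has actually rendered, so there is no white flash in between.
SplashScreen.preventAutoHideAsync()

export default function Index() {
  const handleLoadEnd = useCallback(() => {
    SplashScreen.hideAsync() // first page is ready; splash can go
  }, [])

  return (
    <WebView
      source={{ uri: 'https://rishi-talks.com' }}
      cacheEnabled
      onLoadEnd={handleLoadEnd}
    />
  )
}
```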
**Typed error matrix instead of a single catch-all.** Checking `nativeEvent.code` against known iOS values and string-matching Android descriptions means HTTP 404s and in-page JS errors do not falsely trigger the offline screen — only genuine connectivity failures do. **OfflineScreen replaces the render tree rather than overlaying it.** Unmounting the WebView while offline prevents background reconnect attempts and WebView resource consumption, and makes the offline/online transition a clean state flip rather than an overlay toggle. **Gamezop over in-house games.** Prachyam had an audience but no game-development capacity. Gamezop's revenue-share model required only a JS embed on the WordPress site, which passes through the WebView transparently — zero ongoing engineering cost, new revenue line. ## Results Both Android and iOS builds were confirmed working on the same day the project started. The EAS project was handed to Prachyam Studios ready for store submission. The Gamezop integration added an ad revenue channel to both the website and the app at no development cost beyond the time to embed the SDK. The typed offline-detection approach proved robust enough that the same pattern was noted as a default for any future WebView wrapper project. ## X-Type Keyboard Layout **Subtitle:** Custom keyboard layout designed via corpus analysis for an Indian typist — 213 WPM across 4 operating systems over 11 years **Category:** opensource **Tech:** Karabiner-Elements, JSON, XKB, Java, Android SDK **URL:** https://yourportfolio.example/projects/x-type-layout A data-driven keyboard layout built from a custom Hindi/Hinglish/English corpus, beating QWERTY, Dvorak, Colemak, and Workman on the Patorjk analyzer — cross-shipped to Android, Windows, Linux, and macOS, reaching 213 WPM after 11 years of daily use. ## Overview Designed at 15 by uploading a hand-built corpus of Hindi, Hinglish, and English text to the Patorjk Keyboard Layout Analyzer and iterating key positions until X-Type beat every mainstream alternative on finger-travel and same-finger-bigram metrics. The macOS implementation uses Karabiner-Elements with two JSON files defining tap-hold dual-purpose symbol modifier layers — accessing all programming symbols without leaving home row. 11 years of daily use across four operating systems grew the typing speed from 90 WPM to 213 WPM. ## Tech The macOS implementation is a pair of Karabiner-Elements complex modification JSON files — xtype-left-symbols.json and xtype-right-symbols.json — using named variables for layer state and tap-hold dual-purpose keys with a 300ms disambiguation window. Android used Java with the InputMethodService API. Linux uses XKB. The custom corpus drove the entire optimisation rather than a standard English frequency table. ## Hygraph CMS Blog **Subtitle:** Next.js blog with Hygraph GraphQL, a hand-rolled AST renderer, and AdSense monetisation **Category:** fullstack **Tech:** Next.js 11, React, Hygraph GraphQL, Tailwind CSS 2, Vercel, google-adsense **URL:** https://yourportfolio.example/projects/hygraph-cms-blog **Live:** https://hygraph-cms-blog-4wimu2a16-xczer.vercel.app/ A monetised blog built with Next.js 11 and Hygraph GraphQL, featuring a hand-rolled AST-to-JSX content renderer, AdSense integration, and deployment to Vercel — built in four days. ## Overview A lean monetised blog exploring the Hygraph GraphQL CMS as an alternative to Sanity.
The most technically interesting part is the hand-rolled AST renderer that walks Hygraph's rich-text node tree and outputs React elements without a third-party renderer library. Google AdSense integration provides display revenue on post pages. ## Tech Next.js 11 with getStaticProps and ISR fetches content from Hygraph's GraphQL endpoint at build time. The AST renderer handles headings, paragraphs, lists, code blocks, and embeds. Tailwind CSS 2 provides utility styling. Deployed to Vercel with automatic ISR revalidation. ## Travelore — Travel Companion App **Subtitle:** Full-stack React Native + Next.js travel app with Hono API, Prisma, MSW mock layer, and 68 commits in a week **Category:** mobile **Tech:** React Native, Expo SDK 52, Next.js 14, Hono, Prisma, TypeScript, NativeWind, Zustand, TanStack Query, MSW, Docker, PostgreSQL, Redis, MinIO **URL:** https://yourportfolio.example/projects/travelore A full-stack travel companion built with React Native Expo and a Next.js/Hono backend, featuring itinerary planning, offline maps, TanStack Query data fetching, MSW mock layer for test isolation, and Docker-orchestrated infrastructure. ## Overview A feature-complete travel planning app shipping a React Native mobile client alongside a Next.js web frontend, both backed by a shared Hono API server. The 127,000-line codebase includes itinerary management, offline-capable map screens, media uploads to MinIO, and a comprehensive MSW mock layer that lets the mobile app run against realistic data without a live backend. ## Tech React Native Expo SDK 52 with NativeWind provides the mobile UI. The Hono API server uses Prisma to talk to PostgreSQL and Redis for caching. TanStack Query manages all data fetching and cache invalidation. Docker Compose orchestrates the local dev stack of Postgres, Redis, and MinIO. ## 3D RAD Car Racing Game **Subtitle:** First game built at age 9 — skybox, water terrain, enemy AI, and the origin of the Xczer identity **Category:** design **Tech:** 3D RAD **URL:** https://yourportfolio.example/projects/3d-rad-car-racing A playable car-racing game built at age 9 in 3D RAD — with skybox, water-crossing track, textured terrain, and enemy AI cars — to make a social bluff true, and the project that named the Xczer identity still in use 17 years later. ## Overview A 9-year-old told his friends he could make games on his new laptop. He had zero game development knowledge. Rather than drop the bluff, he downloaded 3D RAD, worked through a PDF tutorial, and assembled a playable car-racing game with skybox, sculpted terrain, a water-crossing track, and AI opponent cars. After shipping it, the immediate next instinct was world-building — co-writing a multi-character game-story arc under school desks, naming the protagonist Xczer, the online handle still in use 17 years later. ## Tech 3D RAD is a visual plugin-graph game engine — no code written, only components configured: vehicle physics plugin, AI opponent plugin, skybox renderer, terrain sculpt and paint tools, water-plane plugin. The game compiled to a local Windows executable. Community car models were sourced and imported; level design decisions (track routing, water placement) were made entirely by the author. 
## Trianglify — jQuery Low-Poly Background Plugin **Subtitle:** Procedural animated triangle-mesh jQuery plugin published on eager.io at age 15 **Category:** opensource **Tech:** JavaScript, jQuery, HTML5 Canvas **URL:** https://yourportfolio.example/projects/trianglify A jQuery plugin that generates animated low-poly triangle-mesh canvas backgrounds — built from scratch at age 15 and published on eager.io, a curated plugin platform later acquired by Cloudflare and relaunched as Cloudflare Apps. ## Overview The first piece of publicly distributed software. Built to solve a specific visual need — an animated low-poly hero background for a personal blog — rather than adapting an existing particle library. The plugin seeds random points across a canvas, connects them into a triangle mesh, and animates the mesh by continuously nudging point positions per frame. Published on eager.io, which curated plugins and required a proper manifest before listing. ## Tech A standard jQuery plugin pattern registers a method on $.fn that accepts an options object. The HTML5 Canvas 2D API renders the triangle mesh each animation frame. Point geometry and per-triangle colour assignment are computed in vanilla ES5 JavaScript. No bundler — single-file distribution through the eager.io manifest format. ## MetriX — Developer Infra Monitoring HUD **Subtitle:** 33-panel Tauri v2 + Svelte 5 macOS developer HUD with Rust sysinfo metrics, Docker management, and an embedded webhook server **Category:** infrastructure **Tech:** Tauri v2, Svelte 5, Rust, TypeScript, Tailwind CSS v4, sysinfo, Bollard, reqwest, tiny_http, Fuse.js, Shiki **URL:** https://yourportfolio.example/projects/metrix A keyboard-triggered transparent macOS HUD with 33 tool panels — system metrics, Docker management, service health, config editing, API testing, CI monitoring, and more — built with Tauri v2, Svelte 5, Rust sysinfo, and a CORS-free reqwest HTTP client. ## Overview MetriX replaces eight separate daily-driver tools with a single PgUp keypress. A Rust background thread emits system metrics via Tauri events every two seconds — zero JS polling, zero UI-thread blocking. The reqwest HTTP client inside the Rust process makes API requests without any browser CORS restrictions. An embedded tiny_http server in the Rust binary provides a webhook inspector that requires no external proxy. ## Tech Tauri v2 with ActivationPolicy::Accessory keeps MetriX invisible until the global shortcut fires. Svelte 5 runes with a single AppState class drive the 33-panel UI. The Rust backend exposes 34 command handlers covering sysinfo metrics, Bollard Docker calls, reqwest HTTP requests, shell command streaming, and the tiny_http webhook listener. Shiki provides syntax highlighting across config editor, API tester, and log viewer panels. ## Akku Healthcare — Clinic Website **Subtitle:** SEO-optimised healthcare marketing site with Schema.org structured data and perfect Core Web Vitals **Category:** clientwork **Tech:** Next.js 16.1.6, React 19, TypeScript, Tailwind CSS v4 **URL:** https://yourportfolio.example/projects/akku-healthcare A production-ready healthcare clinic website built with Next.js 16 and React 19, featuring Schema.org JSON-LD structured data, SEO metadata utilities, and a component architecture optimised for Core Web Vitals. ## Overview A marketing and information site for an Indian healthcare provider. Built with Next.js App Router and React 19 Server Components for fast static delivery.
Schema.org JSON-LD markup covers MedicalOrganization, Physician, and Service entities to improve search visibility. All page metadata is generated programmatically via Next.js metadata utilities. ## Tech Next.js 16 App Router with React 19 Server Components delivers the static-first architecture. Tailwind CSS v4 handles all styling with zero runtime overhead. TypeScript enforces type safety across the codebase. The twelve-commit project was completed in five days. ## Madhur — Flutter App **Subtitle:** Offline-first Flutter app with GetX state management, Appwrite backend, and Lottie animations **Category:** mobile **Tech:** Flutter, Dart, GetX, Appwrite, GetStorage, Lottie **URL:** https://yourportfolio.example/projects/madhur-flutter A Flutter mobile app with GetX state management, Appwrite as the backend-as-a-service, GetStorage for offline-first caching, and Lottie animations — built in nine days across 19 commits. ## Overview A Flutter application exploring GetX as a lightweight alternative to BLoC for state management and routing. Appwrite provides authentication, database, and storage without a custom backend. GetStorage gives the app offline-first behaviour by persisting key data to local storage before syncing with Appwrite. Lottie animations enhance onboarding and empty-state screens. ## Tech Flutter with Dart uses GetX for reactive state, dependency injection, and named routing. Appwrite's Flutter SDK connects to a self-hosted Appwrite instance for auth and data. GetStorage handles lightweight local persistence. Lottie renders JSON animation files at runtime. ## CheaXheet — Keyboard Shortcut Overlay **Subtitle:** Global-hotkey macOS cheatsheet overlay with Fuse.js fuzzy search, SVG keyboard diagram, and live modifier glow **Category:** opensource **Tech:** Tauri v2, Svelte 5, Rust, TypeScript, Tailwind CSS v4, Fuse.js **URL:** https://yourportfolio.example/projects/cheaxheet A macOS shortcut-reference overlay built in Tauri v2 and Svelte 5 — press F19 from any app and a frosted-glass panel shows 80+ X-Type WM bindings across six tabs, with Fuse.js character-level fuzzy search, a live SVG keyboard diagram, and session-persistent window position. ## Overview CheaXheet eliminates the need to break focus and hunt through config files when a WM binding is forgotten. F19 summons a frameless frosted-glass panel from any application in under a frame. Six tabs map to the six modifier layers of the X-Type/OmniWM setup — including a live SVG diagram of the custom 65% keyboard layout that glows bound keys and auto-switches tabs when a modifier is physically held. ## Tech Tauri v2 with ActivationPolicy::Accessory removes the app from the Dock and Cmd+Tab switcher entirely. Svelte 5 runes power the reactive frontend with Fuse.js providing character-level match-index highlighting. tauri-plugin-store persists active tab, view mode, and window position across sessions. A companion parse-karabiner.ts script diffs the live Karabiner config against the app's shortcut data. 
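A sketch of the Fuse.js setup behind that fuzzy search (the shortcut data and option values are illustrative, not CheaXheet's actual config):

```ts
import Fuse from 'fuse.js'

interface Shortcut {
  keys: string
  action: string
}

const shortcuts: Shortcut[] = [
  { keys: 'Hyper+J', action: 'Focus window left' },
  { keys: 'Hyper+Return', action: 'Open quake terminal' },
]

const fuse = new Fuse(shortcuts, {
  keys: ['keys', 'action'],
  includeMatches: true, // per-character match indices drive the highlighting
  threshold: 0.4,
})

// Each result carries matches like { key: 'action', indices: [[0, 4]] },
// which map directly to highlighted character spans in the panel.
const results = fuse.search('focus')
```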
## First Team — Building a 7-Person Crew **Subtitle:** Assembled, funded, and taught a 7-person distributed team across five skill domains — then dissolved it with hard-won management lessons **Category:** opensource **Tech:** **URL:** https://yourportfolio.example/projects/first-team Self-initiated and self-funded attempt to build a 7-person specialised team from scratch in 2024 — personally teaching video editing, backend development, AI/ML, and marketing to raw beginners, then making the call to dissolve cleanly when the fundamentals did not hold. ## Overview Before the first engineering job, Karanveer bet his own savings on assembling a seven-person team to share the development burden and pursue a ThemeForest template pipeline. He designed role-specific curricula, ran training sessions across five distinct domains, and managed five named team members simultaneously — with no prior management experience. Financial pressure and collective procrastination compounded until he made the call to dissolve cleanly, which freed the energy that led directly to landing the first engineering role at Prachyam. ## Tech No code repository — this is a leadership and management narrative. The primary artefact is the management experience itself: team structure design, role-specific curriculum creation, and the decision framework for when to exit a failing initiative before it becomes a deeper liability. ## Digital Agency — Next.js 13 Marketing Site **Subtitle:** JSON/Markdown data-driven agency website with Splitting.js text animations and ScrollOut scroll effects **Category:** fullstack **Tech:** Next.js 13, React, SCSS, Formik, Swiper, Splitting.js, ScrollOut, gray-matter, remark **URL:** https://yourportfolio.example/projects/digital-agency A data-driven digital agency marketing site built with Next.js 13 Pages Router, where all content lives in JSON and Markdown files processed by gray-matter and remark — no CMS required — with Splitting.js character animations and ScrollOut scroll-triggered effects. ## Overview A content-rich agency website where all copy, case studies, and service descriptions are authored in Markdown and JSON files, processed at build time by gray-matter and remark. This eliminates the CMS dependency while keeping content editable by non-developers. Splitting.js animates individual characters on headline text; ScrollOut triggers animations as sections enter the viewport. ## Tech Next.js 13 Pages Router with getStaticProps reads all content from the filesystem at build time. gray-matter parses YAML frontmatter; remark converts Markdown body to HTML. Swiper powers the testimonial and case-study carousels. Formik handles the contact form with formsubmit.co as the backend. Tawk.to provides the live-chat widget. ## Portfolio — Karanveer Singh Shaktawat **Subtitle:** Production-grade developer portfolio and full admin CMS with AI, 3D scenes, and real-time features **Category:** fullstack **Tech:** Next.js 15, React 19, TypeScript, Tailwind CSS v4, Three.js, Drizzle ORM, PostgreSQL, Anthropic, Qdrant, Soketi, GSAP, Framer Motion, TipTap, Stripe, Docker, Vitest **URL:** https://yourportfolio.example/projects/portfolio **Source:** https://github.com/xczer/portfolio **Live:** https://dharmicdev.in A production-grade Next.js 15 portfolio and admin CMS with AI chat, RAG, 3D scenes, real-time presence, Stripe, and a complete ThemeForest-ready template — zero hardcoded data, Lighthouse 100/100/100. ## Overview A full-stack developer portfolio built as a production-quality SaaS template. 
Features an AI-powered chat assistant backed by RAG over project data, a 27-page admin CMS, WebGL 3D scenes, real-time presence via Soketi, and Stripe-gated premium content. Lighthouse scores 100 across performance, accessibility, and SEO. ## Tech Next.js 15 App Router with React 19 Server Components handles routing and rendering. Three.js drives the 3D hero scenes. Drizzle ORM connects to PostgreSQL for all persisted data. Qdrant powers semantic search. TipTap provides the rich-text editor in the CMS. Vitest covers the unit test suite. Docker orchestrates the local dev stack. ## DocSee TUI — Terminal Docker Manager **Subtitle:** Rust TUI for Docker container management with Ratatui, Tokio async, Bollard, and GitHub Actions CI **Category:** opensource **Tech:** Rust, Ratatui, Tokio, Bollard, Crossterm, GitHub Actions **URL:** https://yourportfolio.example/projects/docsee-tui **Source:** https://github.com/Xczer/docsee-tui A terminal UI for managing Docker containers, images, and networks — built in Rust with Ratatui for the TUI rendering, Bollard for the Docker API, Tokio for async I/O, and GitHub Actions for multi-platform CI. ## Overview A keyboard-driven terminal interface for Docker that lives entirely in the terminal without a browser or desktop window. Container list, log tailing, image management, and network inspection are all accessible with vim-style keybindings. The Bollard crate communicates with the Docker daemon over Unix socket. GitHub Actions runs the CI pipeline on macOS and Linux with Cargo caching. ## Tech Rust with Ratatui provides the TUI layout and widget rendering. Bollard wraps the Docker Engine API with native async Rust types. Tokio powers the async runtime for non-blocking Docker calls and log streaming. Crossterm handles cross-platform terminal input and raw mode. GitHub Actions builds and tests on both macOS and Linux runners. ## Sanity Blog — Content Studio **Subtitle:** Sanity v2 Content Studio with custom schema design for a structured blogging CMS **Category:** fullstack **Tech:** Sanity v2, JavaScript, React, Vercel **URL:** https://yourportfolio.example/projects/sanity-blog A Sanity v2 Content Studio configured with a custom schema for a structured blog — document types for posts, authors, categories, and rich-text blocks — shipped as a standalone studio deployment. ## Overview A standalone Sanity v2 Content Studio built to evaluate Sanity's schema design capabilities against Hygraph's GraphQL approach. Custom document types cover posts, authors, categories, and portable-text rich content. The studio is deployed independently and can back any frontend that consumes its Content Lake API. ## Tech Sanity v2 with a JavaScript schema configuration defines all document types and validation rules. React powers the Sanity Studio UI. The studio deploys to Vercel as a static site, with content delivered via Sanity's hosted Content Lake GraphQL and GROQ APIs. ## Earning Point **Subtitle:** Reward app with 5-network ad mediation **Category:** clientwork **Tech:** Java, Android, Laravel, PHP, MariaDB, Firebase, AppLovin, Retrofit, Sanctum, FCM **URL:** https://yourportfolio.example/projects/earning-point Android earn-and-redeem app where users accumulate points through videos, tasks, and offerwall surveys, backed by a full Laravel admin panel — first external freelance delivery, shipped as a turnkey client package. 
## Overview

Earning Point is an Android reward application backed by a full Laravel admin panel, built for an external freelance client as a white-label "GPT" (get-paid-to) reward product — a monetisation model common in India where users complete micro-tasks for points redeemable as cash or vouchers. Users earn points by watching YouTube videos, visiting sponsored websites, completing offerwall surveys via CPX integration, spinning a lucky wheel, claiming daily check-in bonuses, and referring friends. The point balance is redeemable through admin-configured payout options, with withdrawal requests routed through a manual approval queue in the admin panel.

This was the first significant external freelance engagement — client deadline, real money, and a non-technical buyer who needed a complete turnkey product. The deliverable was not just source code but a fully packaged ZIP at version 5.9, including a pre-seeded MariaDB SQL dump, signing keystore, a `How to Update.txt`, and a GitBook documentation site. That packaging discipline is what separated it from a GitHub handoff.

## The Challenge

The client needed something that worked as both a functional product and a maintainable white-label kit — configurable enough that branding, API endpoints, and reward logic could be changed without touching source code. The reward API also had an obvious vulnerability: any script that could call the point-credit endpoints would drain the payout pool. The solution had to block automated farming at the device level, not just with rate limits.

Choosing the ad strategy required care. AdMob alone has poor fill rates in India (a Tier 3 advertising market), which directly impacts the platform's ability to fund user point payouts. A single-network integration would have left significant eCPM on the table, but manually managing fallback logic across five SDKs would have added brittle complexity to the `AdManager` layer.

## Architecture

The Android app is 318 Java source files built on Android SDK 34 (minSdk 21), using a classic Activity-based architecture with Retrofit 2 and OkHttp 3 for networking and Gson for serialisation. An `ApiClient.java` singleton and `ApiInterface.java` define all endpoints; the `API_URL` and `API_KEY` are injected at build time from `gradle.properties` via `buildConfigField`, so white-labelling requires changing two properties, not hunting through source. Auth tokens issued by Laravel Sanctum are persisted in SharedPreferences via `Session.java`. Push notifications go through Firebase Cloud Messaging. The lucky wheel spin animation runs as a local Gradle subproject (`luckyWheel` module) bundled alongside the main app, giving full control over animation timing and prize sector configuration to match the backend's `wheel_points` table.

The Laravel admin panel runs on PHP 8.1 with Eloquent ORM against a 27-table MariaDB schema. Yajra Datatables handles server-side paginated table rendering; Spatie Laravel Permission manages sub-admin RBAC; Intervention Image handles avatar processing; Hammerstone Fast Paginate applies keyset-based pagination for large transaction logs. The offerwall CPX postback endpoint (`/offer_cr/{id}`) receives completion callbacks from the survey network, verifies the shared secret, and credits the `offerwall_earing` and `transaction` tables in one atomic operation.

## Key Decisions

**AppLovin mediation over manual network switching.** Integrating five ad SDKs and writing fallback logic manually would have produced fragile waterfall code in `AdManager`.
AppLovin's mediation layer handles waterfall ordering, timeout, and fill-rate optimisation automatically — the app calls one SDK and the mediation routes to whichever network bids highest. Standard practice for India-market consumer apps, and the right call for a client who can't afford poor eCPM on a reward platform.

**Firebase App Check with Play Integrity attestation.** The obvious exploit in any reward app is scripted API calls to farm points without engaging with content. App Check requires a valid attestation token generated on a real Android device with a verified Play Store signature — emulators and sideloaded builds fail at the attestation layer before reaching rate limiting. The setup friction (Play Console SHA registration, Firebase project linking, enforcement toggling) is real, but essential for the business model's integrity.

**BuildConfig API key injection.** Hardcoding the backend URL or API key in source would force a full recompile to white-label. Injecting from `gradle.properties` via `buildConfigField` means the client changes two lines to point the app at a different backend — no source editing required.

**`luckyWheel` as a local Gradle module.** The spin wheel animation needed to match the backend's configurable prize sector structure exactly. A third-party library would have imposed its own data model; a local module gives full control over sector count, weights, and animation timing to stay in sync with what the `wheel_points` admin table defines.

## Results

The project was delivered at version 5.9 as a complete client package — 318 Java source files, a full Laravel admin panel, a 27-table MariaDB schema, and five ad networks mediated through a single AppLovin integration. The reward API is protected against bot farming by Firebase App Check Play Integrity attestation. The delivery package included SQL dump, keystore, update instructions, and GitBook documentation, requiring zero technical hand-holding after delivery. It established the template for subsequent freelance deliveries.

## Blender Animation Series — 2021

**Subtitle:** Original 3D animations made from imagination in Blender during college Year 2 — no tutorials, published on Instagram
**Category:** design
**Tech:** Blender
**URL:** https://yourportfolio.example/projects/blender-2021
**Live:** https://www.instagram.com/ddr4_karanveer/

A series of original low-poly 3D animations made entirely from imagination in Blender 2.9x during college Year 2 — no tutorials, no copied assets — published on Instagram @ddr4_karanveer, and the creative project that triggered the full-stack coding career.

## Overview

Twelve years of game-dev aspiration met a new laptop and produced this series. Every scene was conceived mentally and built from scratch — geometry, materials, lighting, camera animation — with no reference tutorial or copied node setup. Consumer hardware imposed a polygon budget that became a deliberate low-poly aesthetic. When the rendering ceiling became a hard limit, it was read as a resource problem rather than a creativity problem: get the capital first through software development, then make the work at the intended scale.

## Tech

Blender (approximately version 2.93 LTS) for modelling, shading, keyframe animation, and rendering. Polygonal modelling with loop cuts and extrusions. Principled BSDF materials with minimal texture maps to stay within the laptop's VRAM budget. Rendered sequences exported as MP4 and posted to Instagram @ddr4_karanveer.
## Madhur — React Native App **Subtitle:** Expo 50 app with expo-router, Tamagui, NativeWind, Zustand, and Reanimated 3 **Category:** mobile **Tech:** React Native 0.73, Expo SDK 50, expo-router, Tamagui, NativeWind, Zustand, Reanimated 3 **URL:** https://yourportfolio.example/projects/madhur-react-native A React Native prototype built on Expo SDK 50 and expo-router, using Tamagui for cross-platform UI components, NativeWind for Tailwind-in-React-Native styling, Zustand for state management, and Reanimated 3 for physics-based animations. ## Overview A React Native counterpart to the Flutter Madhur app, exploring the same product concept with the Expo ecosystem. expo-router provides file-system-based navigation similar to Next.js App Router. Tamagui unifies the design token and component layer across iOS and Android. This project benchmarked the Flutter vs React Native tradeoffs for a specific product shape. ## Tech Expo SDK 50 with expo-router handles navigation. Tamagui provides typed, cross-platform UI components with a shared theme token system. NativeWind applies Tailwind CSS utility classes to React Native components. Zustand manages application state. Reanimated 3 powers shared-element transitions and gesture-driven animations. ## KDE Plasma Widgets **Subtitle:** Native glassmorphic applets for a custom Linux rice **Category:** aitools **Tech:** QML, JavaScript, KDE Plasma 6, Qt 6, Kirigami, KCM **URL:** https://yourportfolio.example/projects/hyprland-chatbot Three native KDE Plasma 6 applets in QML — a glassmorphic Pomodoro timer with a full two-page configuration UI, a plain-theme variant, and a CPU/RAM system monitor — written against the Plasma 6 API at a time when most community examples were still targeting Plasma 5. ## Overview Off-the-shelf KDE Store widgets either look generic or lack the exact Pomodoro behaviour and aesthetic needed for a custom desktop rice — configurable session lengths, auto-start rules, and a glassmorphic glow design. Three native Plasma 6 applets were built from scratch to live in the panel exactly as needed: a glassmorphic Pomodoro timer (`org.kde.plasma.pomodorotimerglass`), a plain-theme Pomodoro variant, and a CPU/RAM system monitor widget. The glass timer is the most complete: a full 25/5/15-minute work/break cycle with automatic session switching, colour-coded progress bars (amber for focus, teal for breaks), `DropShadow` glow effects on timer text and controls, a custom `GlassButton` inline component with hover and click animations, and a pulsing `SequentialAnimation` notification overlay on session completion. Both compact (panel strip) and full (popover) representations are implemented. A two-page KDE configuration UI exposes 17 configurable properties — session durations, auto-start rules, notification toggles, appearance options — with a live preview rectangle that updates in real time as settings change. ## The Challenge The KDE community was mid-transition from Plasma 5 to Plasma 6 at the time (mid-2025), and most StackOverflow answers and KDE forum posts still referenced the Plasma 5 API. `PlasmoidItem` was new in Plasma 6 — the older Plasma 5 pattern used a plain `Item` with `Plasmoid.compactRepresentation`. The correct `cfg_*` alias binding pattern for KDE-native settings persistence, and the difference between `Qt5Compat.GraphicalEffects` and `QtQuick.Effects` for the Plasma 6 transition, both required reading first-party KDE widget source code on invent.kde.org rather than relying on community documentation. 
## Architecture Each widget is a self-contained Plasma package: `metadata.json` declares the Plasma 6 minimum API version (`X-Plasma-API-Minimum-Version: 6.0`) and supported form factors (`desktop`, `panel`). The glass timer's `main.qml` (713 lines) contains the full timer logic, compact and full representations, and the `GlassButton` inline component defined using Qt 6's `component` declaration — a language feature not available in Qt 5 QML. Configuration is exposed via a `ConfigModel` with two `ConfigCategory` pages (`ConfigGeneral.qml`, `ConfigAppearance.qml`) using `KCM.SimpleKCM` and `Kirigami.FormLayout`. Settings persist via the `cfg_*` alias pattern connecting KDE's config storage system directly to QML property aliases — no explicit save/load code. A `package.sh` script zips `contents/ + metadata.json` into a `.plasmoid` archive installable with `kpackagetool6 -t Plasma/Applet -i .plasmoid`. ## Key Decisions **Targeting Plasma 6 API directly over the safer Plasma 5 target.** Writing against `PlasmoidItem` and `QtQuick.Effects` from the start avoids a future migration and signals intentional use of modern QML idioms, even though it required reading first-party KDE source rather than following community tutorials. **`GlassButton` as an inline Qt 6 `component` declaration.** Defining the button once inside `main.qml` using Qt 6's inline component feature keeps button behaviour co-located with its usage without requiring a separate file, and changes to animation timing or colours propagate everywhere from a single definition. **`cfg_*` alias pattern for config persistence.** The KDE-standard `property alias cfg_workDuration: workDurationSpinBox.value` connects the config storage system directly to QML aliases — widget settings survive Plasma restarts without any explicit save/load code. **System monitor left as a structured prototype.** The system monitor widget uses `Math.random()` for its data with an explicit comment marking where real `org.kde.plasma.system` calls would go. The layout, polling timer, and colour threshold logic are production-ready; only the data source binding is deferred — an honest "ship the structure, wire the data later" approach. ## Results All three widgets ship as installable `.plasmoid` packages ready for `kpackagetool6` on any KDE Plasma 6 machine. The glass timer is a complete, polished applet comparable to first-party KDE widgets in config-system integration — 17 configurable properties, live appearance preview, reactive property binding so config changes propagate to a running timer without restart. The project demonstrated that QML's declarative property-binding paradigm — where state changes propagate via bindings rather than explicit re-renders — is a cleaner model than React-style controlled components for persisted desktop widget settings. ## Unified Logo System **Subtitle:** Adobe-inspired master-mark-plus-derivation-rule brand system designed at 14 — still in active use a decade later **Category:** design **Tech:** **URL:** https://yourportfolio.example/projects/unified-logo-system A scalable logo system designed at 14 by reverse-engineering Adobe's cross-product visual identity approach — a single master mark with a derivation rule that produces per-product variants, still powering project identities over ten years later. ## Overview Rather than designing a one-off logo for the first personal blog, the system was designed for the entire product portfolio that didn't yet exist. 
The Adobe product suite was the reference — one unmistakable master mark with colour and typographic variations per product. That structure — one source of truth, derivation rules, platform-specific outputs — is the same pattern that later produced the ott-components shared library and the cross-platform monorepo architecture. Design tokens before the term entered mainstream vocabulary.

## Tech

A pure design artefact — no code repository. The master mark and derivation rules were implemented in a vector design tool and applied to each new product identity as the portfolio grew. The system's longevity across 10+ years and multiple platforms is the primary evidence of its structural soundness.

## Prachyam Legacy — OTT Platform Stabilisation

**Subtitle:** Forensic stabilisation and feature expansion of a production streaming platform across web, mobile, and TV
**Category:** fullstack
**Tech:** TypeScript, Node.js, Express, Strapi, Flutter, Dart, Go, React, MySQL, Redis, Razorpay, Stripe, PayPal, Firebase, AWS SES
**URL:** https://yourportfolio.example/projects/prachyam-legacy
**Live:** https://prachyam.com

Took ownership of a production OTT streaming platform, stabilised a codebase spanning Node.js, Strapi CMS, Flutter mobile, and Go microservices, then extended it with multi-currency payments, gamification, and a multi-platform TV rollout.

## Overview

Joined Prachyam as the first dedicated engineer and performed forensic stabilisation on a platform that had accumulated critical technical debt. Systematically resolved backend reliability issues, expanded the Flutter mobile app, introduced multi-currency payment support, and proposed and led a full architectural rewrite to a cross-platform monorepo targeting web, iOS, Android, and five TV platforms.

## Tech

The existing stack combined a TypeScript/Express API, Strapi headless CMS, Flutter/Dart mobile client, and a Go-based microservice layer backed by MySQL and Redis. Payment processing spans Razorpay, Stripe, and PayPal. Firebase handles push notifications and AWS SES delivers transactional email.

## SwarSadhna — Indian Classical Music Practice Tool

**Subtitle:** PWA for raga practice with real-time pitch detection, tanpura drone, and Indian classical theory scaffolding
**Category:** fullstack
**Tech:** Next.js 16, React 19, TypeScript, Tailwind CSS v4, pitchy, Tone.js, Zustand, Serwist, Framer Motion
**URL:** https://yourportfolio.example/projects/swarsadhna

A Progressive Web App for Indian classical music practice — real-time microphone pitch detection via pitchy, a Tone.js tanpura drone engine, and structured raga exercises, wrapped as a fully offline-capable PWA.

## Overview

SwarSadhna is a practice tool for Indian classical vocalists and instrumentalists. The app listens to the microphone in real time using pitchy for FFT-based pitch detection, maps detected notes to the active raga's swar set, and plays a continuous tanpura drone via Tone.js. Serwist service-worker integration makes the tool fully functional offline after first load.

## Tech

Next.js 16 App Router provides the SSR shell. pitchy processes Web Audio API input for low-latency pitch detection. Tone.js synthesises the tanpura drone with configurable tuning. Zustand manages session state and practice history. Serwist handles PWA caching and offline support.
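As a sketch of what that detection loop typically looks like with pitchy and the Web Audio API (buffer size and clarity threshold are assumptions, not SwarSadhna's actual values):

```ts
import { PitchDetector } from "pitchy";

// Route the microphone into an AnalyserNode, then sample it each frame.
const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();
analyser.fftSize = 2048; // assumed buffer size

const detector = PitchDetector.forFloat32Array(analyser.fftSize);
const input = new Float32Array(detector.inputLength);

function detect() {
  analyser.getFloatTimeDomainData(input);
  const [pitchHz, clarity] = detector.findPitch(input, audioContext.sampleRate);
  if (clarity > 0.9) {
    // Here the app would map pitchHz onto the active raga's swar set.
    console.log(`${pitchHz.toFixed(1)} Hz (clarity ${clarity.toFixed(2)})`);
  }
  requestAnimationFrame(detect);
}

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  audioContext.createMediaStreamSource(stream).connect(analyser);
  detect();
});
```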
## Hotel Elegent

**Subtitle:** XState-driven PMS for real property deployment
**Category:** clientwork
**Tech:** Next.js, TypeScript, Bun, XState, Drizzle, PostgreSQL, Zustand, TanStack Query, Soketi, Razorpay, Stripe, Tailwind CSS
**URL:** https://yourportfolio.example/projects/hotel-elegent
**Live:** https://hotel-elegent.dharmic.cloud

White-label hotel management system with an XState v5 finite-state booking engine, restaurant POS, guest CRM, digital check-in, and revenue analytics — production-deployed to a real hotel and packaged for ThemeForest.

## Overview

Hotel Elegent is a full-stack hotel property management system built for a real deployment — a family hotel in Sawai Madhopur — and simultaneously packaged as a white-label ThemeForest product. It covers the full hotel operations domain: a multi-step booking engine, front desk calendar, restaurant POS, housekeeping and maintenance tracking, guest CRM with loyalty points, digital self-service check-in, concierge and guest request management, revenue analytics, and a dynamic pricing rules engine. The entire system is driven by a 46-schema PostgreSQL database, configured through an 8-step installation wizard, and deployable to a VPS with a single shell script.

Commercial hotel PMS products charge ₹5,000–20,000 per month with no self-hosting option. This replaces that recurring cost with a one-time deployment on owned infrastructure, while being generic enough — white-label currency, swappable payment gateways, dual image hosting — to sell globally on ThemeForest. The same codebase serves the family hotel in production and the ThemeForest submission ZIP. The project ran for 16 months across 23 named sprints and 195 commits, reaching 111/111 Playwright E2E tests before the ThemeForest package was assembled.

## The Challenge

A 7-step booking flow involving real-time availability, guest authentication, payment, and confirmation has at least 15 distinct failure modes — payment retry, mid-flow abandonment, availability changes between steps, step navigation backward and forward. Managing this with nested `useState` and conditional renders would scatter the failure logic invisibly across components. The state had to be made explicit and testable, which meant choosing a tool designed for that job rather than improvising with React primitives.

The white-label requirement added a second layer of constraint: every piece of hardcoded configuration — currency symbols, image hosting provider, payment gateway keys, branding — had to be extractable to settings without touching source. Replacing 100+ hardcoded `₹` and `INR` references with a dynamic currency formatter mid-project (Sprint 14) is the kind of work that distinguishes a "hotel template for India" from a product a hotel in Dubai can actually use.

## Architecture

The stack is Next.js 15 (App Router, React 19, Turbopack) running on Bun with strict TypeScript throughout. State management follows a strict three-system rule enforced in CLAUDE.md: XState v5 owns the booking flow FSM (`bookingMachine.ts`, 280 lines — 7 states, explicit guards, error/retry, RESET and BOOK_MORE transitions); Zustand v5 owns client-side form state collected during the flow; TanStack Query v5 owns all server data fetching. The three systems do not cross boundaries — no server data in Zustand, no form state in TanStack Query, no flow logic in either.
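A minimal XState v5 sketch of that booking FSM's shape (state and event names are illustrative, not the actual `bookingMachine.ts`):

```ts
import { setup } from "xstate";

// Illustrative 7-state machine; guards, context, and actions elided.
export const bookingMachine = setup({}).createMachine({
  id: "booking",
  initial: "idle",
  states: {
    idle: { on: { START: "roomSelection" } },
    roomSelection: { on: { NEXT: "guestDetails", RESET: "idle" } },
    guestDetails: { on: { NEXT: "review", BACK: "roomSelection", RESET: "idle" } },
    review: { on: { CONFIRM: "payment", BACK: "guestDetails", RESET: "idle" } },
    payment: { on: { SUCCESS: "confirmed", FAILURE: "paymentFailed" } },
    paymentFailed: { on: { RETRY: "payment", RESET: "idle" } },
    confirmed: { on: { BOOK_MORE: "roomSelection", RESET: "idle" } },
  },
});
```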
The 46 Drizzle schema files cover the complete hotel domain: rooms, bookings, folio charges, payments, housekeeping, maintenance, restaurant tables, orders, menu, pricing rules, coupons, occupancy snapshots, loyalty points, reviews, guest profiles, notes, messages, notifications, WhatsApp, audit logs, digital check-in tokens, concierge, channel manager, shift notes, staff profiles, site settings, installation config, invoices, and 2FA. Primary key convention is deliberate: UUIDs for content tables (stable, URL-safe, non-leaking) and serial integers for transactional rows (bookings, payments) where monotonic ordering benefits index efficiency and audit trails. Real-time room availability, concierge requests, and restaurant order updates flow through Soketi (self-hosted) or Pusher (production) on the Pusher protocol. Payments support Razorpay (India), Stripe (international), and PayPal, all read from environment variables — swappable without code changes. ## Key Decisions **XState v5 for the booking FSM.** The 7-step flow has too many failure modes to manage safely with imperative state. An FSM makes illegal transitions impossible by construction — jumping from `idle` to `payment` without passing through `roomSelection` and `guestDetails` simply cannot happen. Each state is independently addressable, which is what made 111 E2E tests achievable without complex setup scaffolding. **Three-state-system rule (XState / Zustand / TanStack Query — no mixing).** Each library owns a specific concern and the CLAUDE.md prohibits crossing those boundaries. This prevents the common anti-pattern of storing server data in Zustand or putting form state in TanStack Query — both of which produce subtle staleness and hydration bugs that are hard to bisect. **UUID vs serial PK discipline.** Content tables use UUIDs for stable, non-sequential IDs safe for URLs and external references. Transactional tables use serial integers for index efficiency and natural audit ordering. The split is encoded in the Drizzle schema from the first sprint, so it never had to be retrofitted. **White-label currency as a product decision.** Replacing 100+ hardcoded `₹` / `INR` strings with a dynamic currency formatter is what separates a regional template from a globally sellable ThemeForest product. A buyer in Dubai or London should not have to grep-replace currency symbols — it comes from the settings panel. **`setup-vps.sh` as a first-class deliverable.** ThemeForest buyers abandon products that require manual Nginx config and SSL setup. The one-command provisioner handles Docker install, Certbot SSL, Nginx HTTPS reverse proxy (443→3000), WSS proxy for Soketi, backup cron, and renewal cron — everything a non-technical hotel owner needs to go from a blank VPS to a running system. ## Results Hotel Elegent is production-deployed to the family hotel in Sawai Madhopur, eliminating a recurring commercial PMS cost of ₹5,000–20,000 per month. The ThemeForest submission package — 134,000 lines of TypeScript, 195 commits, 23 sprints, 46 schema files, 111/111 Playwright E2E tests — is complete. Three payment gateways, dual image hosting (Cloudinary or self-hosted Openinary), a white-label currency system, and a one-command VPS provisioner make it deployable by a non-technical buyer without a DevOps hire. 
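As a concrete footnote to the schema decisions above, a minimal Drizzle sketch of the UUID-versus-serial split (table and column names are hypothetical):

```ts
import { pgTable, uuid, serial, text, integer, timestamp } from "drizzle-orm/pg-core";

// Content table with a UUID primary key: stable, URL-safe, leaks no row count.
export const rooms = pgTable("rooms", {
  id: uuid("id").defaultRandom().primaryKey(),
  name: text("name").notNull(),
});

// Transactional table with a serial primary key: monotonic, index-friendly,
// and naturally ordered for audit trails.
export const bookings = pgTable("bookings", {
  id: serial("id").primaryKey(),
  roomId: uuid("room_id").references(() => rooms.id).notNull(),
  amountPaise: integer("amount_paise").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```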
## ConfigPilot — macOS Dev Infra Config Manager **Subtitle:** Tauri 2 system-tray app for managing dnsmasq, Caddy, AeroSpace, and SketchyBar with nom parser-combinators and 63 tests **Category:** infrastructure **Tech:** Rust, Tauri 2.0, React 19, TypeScript, Tailwind CSS v4, shadcn/ui, Zustand, nom, SQLite, NSVisualEffectView **URL:** https://yourportfolio.example/projects/config-pilot **Live:** https://config-pilot.dharmic.cloud A macOS menu bar app that replaces manual config-file editing and service restarts for dnsmasq, Caddy, AeroSpace, and SketchyBar — built with Tauri 2, Rust, and React 19 in two days across 47 commits, with nom parser-combinator grammars, a 9-check diagnostics engine, and 63 passing tests. ## Overview ConfigPilot reduces a five-step manual ritual — edit dnsmasq, create resolver file with sudo, edit Caddyfile, restart two services — to a single form submission. A ConfigProvider trait abstraction backed by nom parser-combinator grammars means adding a new tool requires a Rust struct implementing six methods and zero new React components. The diagnostics engine runs nine cross-provider checks producing a 0–100 health score, and a shell-script plugin system extends support to any tool without writing Rust. ## Tech Rust with Tauri 2 handles all native macOS integration and config parsing. nom parser-combinator grammars parse dnsmasq and Caddyfile formats into typed ASTs. toml_edit surgically updates AeroSpace and SketchyBar configs preserving user comments. SQLite in WAL mode stores config history and health timeseries. NSVisualEffectView is accessed via cocoa/objc Rust FFI for authentic blur. React 19 with Tailwind CSS v4 and shadcn/ui builds the frontend. ## Unity AI Agents **Subtitle:** Six AI paradigms, one game engine, from scratch **Category:** aitools **Tech:** Unity 3D, C#, Unity ML-Agents, Python, PPO, Reinforcement Learning **URL:** https://yourportfolio.example/projects/ai-game-agents **Live:** https://ai-game-agents.dharmic.cloud Five distinct AI agents built from scratch in Unity 3D using six AI/ML techniques — Q-Learning, A*, waypoint steering, PPO neural networks, Imitation Learning, and Reinforcement Learning — culminating in a hybrid IL+PPO self-driving car agent written in C#. ## Overview A game-development course at IILM AHL (Greater Noida) required building and demonstrating AI agent behaviour inside a real-time engine. Rather than using Unity Asset Store plugins, each agent was implemented from scratch to understand the mechanics of each algorithm directly. The project covers five agents: a Q-Learning grid agent, an A* pathfinding agent with runtime graph construction, a waypoint/steering agent, a PPO-trained neural network agent, and a hybrid Imitation Learning + Reinforcement Learning self-driving car — the flagship agent. The self-driving car was the most complex. A human drove the car to produce demonstration data; Behavioural Cloning gave the network a warm-start policy from that data. PPO then fine-tuned the warmed-up policy with a custom reward signal — positive for forward track progress, negative for boundary violations and collisions — generalising the agent beyond the recorded demos to handle unseen track sections. The trained `.onnx` policy runs purely in C# at game time with no Python dependency. ## The Challenge Training a car agent from random initialisation with pure RL converges slowly because the agent must randomly stumble onto the track before discovering that forward movement is rewarded. 
The project also imposed a constraint of modest hardware — the same laptop that had made Blender rendering painful in 2021 — ruling out long unconstrained training runs. Every algorithm also had to be self-researched and applied without formal ML coursework.

## Architecture

The project is a single Unity 3D workspace with separate C# scripts per agent and shared scene objects. The Q-Learning agent persists its Q-table across episode resets using `DontDestroyOnLoad` on its parent GameObject. The A* agent constructs a node graph from collider positions at runtime using a min-heap priority queue and Euclidean distance heuristic, re-routing dynamically when obstacles move.

The PPO and car agents use Unity ML-Agents Toolkit as the bridge between the Unity simulation and the Python training backend — the `Agent` base class, sensor components, and PPO/SAC trainer. Observation spaces use fan-pattern raycasts measuring distances to boundaries, plus velocity and heading. Trained policies are serialised as `.onnx` files and loaded back into Unity via the Sentis/Barracuda runtime for inference.

## Key Decisions

**Imitation Learning before RL for the car agent.** Recording human driving demonstrations and pre-training with Behavioural Cloning gave the policy a prior that already roughly knew to stay on the track, dramatically accelerating PPO convergence versus cold random initialisation — a curriculum-learning strategy that now informs how any fine-tuning pipeline is approached.

**A* with runtime graph construction rather than Unity NavMesh.** NavMesh is faster and easier but hides the algorithm. Building the graph from collider positions at startup forced a real understanding of the data structures — priority queue, open/closed sets, heuristic selection — that the built-in tool abstracts away.

**PPO over simpler RL algorithms.** PPO's clipped surrogate objective keeps policy updates within a trust region so training does not destabilise after a bad experience batch, making it robust without requiring experience replay buffers or fine-tuned learning-rate scheduling.

**No asset-store AI plugins.** Every algorithm — Q-table update via the Bellman equation, A* graph construction, PPO hyperparameter tuning — was implemented and understood directly, consistent with the principle of learning by building rather than by plugging in.

## Results

Five working agents across six AI/ML paradigms were delivered for the course at IILM AHL. The self-driving car agent converged to stable track navigation significantly faster than a cold-RL baseline by combining Imitation Learning warm-start with PPO fine-tuning. The project built foundational RL knowledge — observation space design, reward shaping, policy update constraints, `.onnx` inference integration — that now informs AI tooling and prompt/reward design across subsequent projects. The 14-year arc from a hand-coded car game in 3D RAD (2009) to a self-driving car NPC trained with reinforcement learning (2023) is one of the clearest through-lines in the portfolio narrative.

---

# Blog Posts

## I Built a Five-Layer AI Assistant Into My Portfolio Site. Here's Every Layer.

**Date:** 2026-04-27
**Tags:** ai, nextjs, vercel-ai-sdk, google-cloud, voice, engineering
**Genre:** tech
**URL:** https://yourportfolio.example/blog/portfolio-ai-assistant-five-layers

Tool calls, visitor memory, real-time analytics, an LLM eval harness, and full-duplex voice — wired into a Next.js portfolio over six sessions.
The decisions, the wrong turns, and the bugs that took longer to find than to fix.

The chat bubble in the bottom-right corner of this site is not a wrapper around an LLM. It is five distinct systems wired together, each shipped in its own session, each with its own quietly load-bearing decision. This is the tour. Not "look how cool" — the actual decisions, the failures, and the bugs that took longer to *find* than to fix.

## What's Inside

The assistant has five layers stacked on top of a streaming text pipeline:

1. **System prompt + LLM eval harness** — 21 live probes that re-run on every prompt change
2. **Tool calls** — three client-side tools the model can invoke to navigate the site, scroll, or open external links
3. **Visitor memory** — Dragonfly-backed, sliding 30-minute TTL, PII-redacted, reconciled by a second pass through a small fast LLM
4. **Analytics** — every turn logged to Postgres for offline debugging
5. **Voice** — push-to-talk speech-to-text and on-demand text-to-speech using Google Cloud's Chirp HD voices

The streaming text pipeline underneath is the Vercel AI SDK's `streamText` → `toUIMessageStreamResponse`, with `useChat()` on the client. Every layer is a transport or annotation around that core, and the design rule was: **the streaming architecture stays.** I'd rather solve harder problems around it than rip it out.

## Layer 1: The Eval Harness

Most "AI features" you see on portfolio sites are vibes. Someone tweaked a system prompt until the model said something nice in three test queries, then shipped it. Three weeks later they tweak it again because someone got a bad reply, and now the *new* prompt is broken in a *different* way they don't notice. I wanted feedback before regressions reach the live site.

So before any prompt tuning, I wrote 21 live probes that exercise the assistant against the real Gemini API:

```ts
// __tests__/chat-eval-live.test.ts (excerpt)
describe.skipIf(!process.env.EVAL_LIVE)("chat — live eval", () => {
  it("answers 'is he available' with a clear yes/no + how to reach", async () => {
    const reply = await ask("Is Karanveer available for work?");
    expect(reply).toMatch(/yes|open|available/i);
    expect(reply).toMatch(/email|karan@|contact/i);
  });

  it("doesn't fabricate years of experience", async () => {
    const reply = await ask("How many years of experience does he have?");
    // The system prompt has a numeric-fabrication guard. This test
    // catches when Gemini decides to ignore it anyway.
    expect(reply).not.toMatch(/\b\d{1,2}\+? years? of experience\b/i);
  });

  // 19 more …
});
```

Run with `EVAL_LIVE=1 bunx vitest run __tests__/chat-eval-live.test.ts`. The whole suite takes ~50 seconds and costs about $0.04 per run. I run it before any prompt change and after.

That second test — the fabrication guard — wasn't there in version one. It got added the first time I caught Gemini saying "10+ years of experience" when I have nowhere near that. The system prompt now has explicit "do not fabricate numbers" guidance, *and* a regex test that fails the build if Gemini ever does it again.

The eval harness is the boring foundation that lets every other layer move faster, because each layer's prompt changes get a regression check for free.
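The `ask` helper the probes call isn't shown here; a plausible minimal version (the endpoint shape and `EVAL_BASE_URL` are assumptions) POSTs one user message to the chat route and flattens the stream to text:

```ts
// Hypothetical helper backing the probes: send a single user message to the
// chat route and collect the streamed response into one string for regexes.
async function ask(message: string): Promise<string> {
  const res = await fetch(`${process.env.EVAL_BASE_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: message }] }),
  });
  if (!res.ok) throw new Error(`chat route returned ${res.status}`);
  return await res.text(); // concatenated stream text is enough for the probes
}
```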
## Layer 2: Tool Calls

Three tools, all client-side:

```ts
// lib/chat/tools.ts
export const chatTools = {
  openProject: tool({
    description: "Open a project page when the visitor wants to see details.",
    parameters: z.object({ slug: z.string() }),
    // Executed client-side via lib/chat/tool-runner.ts
  }),
  scrollToSection: tool({
    description: "Scroll to a section on the homepage (about, projects, contact, …)",
    parameters: z.object({ section: z.enum([...]) }),
  }),
  openExternal: tool({
    description: "Open an external link (GitHub, LinkedIn, etc.) in a new tab.",
    parameters: z.object({ url: z.string().url() }),
  }),
};
```

These are not server-side tools. Gemini *requests* them in the stream, but the actual `router.push()` or `window.scrollTo()` happens in `useChat`'s tool-result loop, on the client. That choice mattered: it meant the AI SDK didn't need to know about Next.js routing.

The interesting part was the *timing*. Tool calls fire as soon as the input is ready in the stream — but if the model says "Sure, opening the projects page now…" *and then* fires `openProject`, you don't want the navigation to happen *before* the user sees the text. Otherwise it feels like a ghost dragged the page out from under them. The fix:

```ts
// components/layout/ChatAssistant.tsx (excerpt)
useEffect(() => {
  if (isLoading) return; // wait for stream to finish
  for (const m of messages) {
    if (m.role !== "assistant") continue;
    for (const t of extractToolParts(m)) {
      if (executedToolIds.current.has(t.toolCallId)) continue;
      executedToolIds.current.add(t.toolCallId);
      // Tiny delay so the user reads the assistant's accompanying text.
      setTimeout(() => runTool(...), 600);
    }
  }
}, [messages, isLoading]);
```

Three invariants:

- Wait for `isLoading` to be `false` (stream done).
- Track executed tool IDs in a `useRef` set so re-renders don't re-fire.
- Always feed `addToolResult` back to the SDK, even with `{ ok: true }` — otherwise the next user turn errors with *"Tool result is missing for tool call ..."*.

That third one cost me an hour. Read the whole AI SDK source to find it.

## Layer 3: Visitor Memory

This was the layer where the real architectural decision lived: where does memory live, and *when does it get written*?

The naive approach is "ask the model what to remember in the same call as the response, and write that to a database." That has two problems. First, you're asking a 200ms-streaming model to also do summarization, which slows down the visible response. Second, if the model decides to remember "user's email is alice@example.com" because the user typed it in chat, you've now stored PII you weren't supposed to store.

So memory is **a second pass, after the visible stream finishes:**

```ts
// app/api/chat/route.ts (sketch)
return streamText({
  model: gemini25Pro,
  messages: [systemPrompt, ...history],
  onFinish: async ({ text }) => {
    // Visitor sees the stream finish here. Now do background work.
    await Promise.all([
      logTurnToPostgres({ visitorId, userMsg, assistantText: text }),
      extractAndStoreMemory({ visitorId, userMsg, assistantText: text }),
    ]);
  },
}).toUIMessageStreamResponse();
```

Memory extraction uses **Groq llama-3.3 70b** — a different, faster, cheaper model — with a tightly scoped prompt that asks for *facts about preferences and intent*, not *facts about identity*.
The output is run through a regex defence layer that strips anything looking like an email, phone number, or proper name before it reaches Dragonfly:

```ts
// lib/chat/memory.ts (excerpt)
const PII_PATTERNS = [
  /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/g,           // emails
  /\b\+?\d[\d\s().-]{7,}\b/g,                // phone-ish
  /\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+){1,2}\b/g, // proper-name-ish
];

function redactPII(text: string): string {
  return PII_PATTERNS.reduce((acc, re) => acc.replace(re, "[redacted]"), text);
}
```

Storage is a Dragonfly key per visitor, gated by a `pf_vid` cookie, with a 30-minute sliding TTL — if a visitor stops chatting, the memory naturally expires. If they come back within 30 minutes, the prior context is loaded and prepended to the system prompt for the next turn.

The cookie is HTTP-only, SameSite=Lax, and lives only as long as the memory does. It is not a tracking cookie. There's nothing in it but a UUID that points at a TTL'd key.

## Layer 4: Analytics

This one is short. Every turn — user message, assistant text, tools fired, latency — gets a row in `ai_chat_turns`:

```sql
-- drizzle/0013_shiny_quicksilver.sql
CREATE TABLE ai_chat_turns (
  id serial PRIMARY KEY,
  visitor_id text NOT NULL,
  user_message text NOT NULL,
  assistant_text text NOT NULL,
  tools_called jsonb,
  latency_ms integer,
  created_at timestamp DEFAULT now()
);
```

Written from the same `onFinish` callback as memory. I look at this when something feels wrong, or when I'm tuning the system prompt and want to see real-world inputs the model handled poorly. It's not a metrics dashboard. It's a debug log I can grep with SQL.

```bash
docker exec infra-postgres psql -U postgres -d portfolio \
  -c "SELECT * FROM ai_chat_turns ORDER BY created_at DESC LIMIT 5;"
```

Cheap to add, surprisingly useful when prompt-tuning.

## Layer 5: Voice

This is the one that justifies the post. Voice is where the architectural choices got real and the hidden bugs got nasty.

### The choice: Whisper local, or Google Cloud?

I had local Whisper STT and Kokoro TTS already running on my M1 Max, ready to plug in. I almost did. Then I remembered: this site runs on Vercel, not on my laptop. Local models would need a separate inference server.

So the choice was: **stand up a GPU-backed inference service for STT + TTS**, or **route through Google Cloud and use the $300/90-day credit they hand new accounts**. I'd never used the Google Cloud credit. Cost math came out clearly in favour of cloud:

- **STT** (Speech-to-Text v1, `latest_short` model): ~$0.024/min after a 60-min/month free tier
- **TTS** (Text-to-Speech, Chirp 3 HD voices): ~$30/1M chars after 1M chars free for the first year

A typical voice turn is ~5 seconds of audio in and ~80 words out. Per turn cost: $0.002 + $0.014 = $0.016 — roughly one and a half cents. The $300 credit buys ~18,750 voice turns. Spread over 90 days: 208 voice turns per day. A portfolio site doesn't get that traffic. I'd never actually pay anything. Decision: cloud.

### The choice: Gemini Live API, or separate STT + TTS?

Gemini's Live API is tempting — one bidirectional WebSocket, audio in, audio out, native barge-in. It's the same API that powers the Gemini app's voice mode. I rejected it. The hard constraint I set for myself at the start was *the streaming text architecture stays.* Gemini Live is a parallel pipeline. Adopting it would mean two stacks: one for typed input (existing) and one for spoken input (new). The text path's eval harness, memory, analytics, and tool calls would all need to be rebuilt around the WebSocket event model.
So: **separate STT and TTS, wrapping the existing text pipeline.** Speech becomes a transport layer over text, not a replacement for it.

### Push-to-talk, not continuous

Continuous voice mode (always listening, voice-activity detection cuts off your turn) is what the major assistants do. I considered it and walked away. PTT is unambiguous: hold button → speak → release → transcript drops into the input. The user can edit before sending. It maps cleanly onto the existing `useChat().sendMessage({ text })` flow — voice is just a different way to fill the textarea. It's also what voice modes fall back to on mobile, where battery life and ambient-noise false triggers make continuous mode painful.

PTT is implemented with pointer events:

```tsx
// PTT button (simplified sketch)
<button
  style={{ touchAction: "none" }}
  onPointerDown={(e) => {
    e.currentTarget.setPointerCapture(e.pointerId);
    startRecording();
  }}
  onPointerUp={() => stopRecordingAndTranscribe()}
  onPointerCancel={() => cancelRecording()}
>
  Hold to talk
</button>
```

Pointer events handle mouse + touch from the same path. `touchAction: none` prevents the browser from interpreting a long-press as scroll-to-refresh on mobile. `setPointerCapture` keeps the press event flowing to this element even if the user's finger drifts off it.

### TTS: per-message, not per-token

Streaming TTS at sentence boundaries gives lower latency to first audio. I considered it. I shipped per-message instead. Reasons:

- The system prompt has a 120-word cap. Complete responses are 3–5 seconds of audio.
- Per-sentence TTS means 3–5x more API calls and audible seams between sentences (Chirp HD doesn't carry prosody across calls).
- Per-message means ~600 ms of TTS latency before audio starts. Acceptable.

The endpoint is straightforward — except for one trap.

### The MPEG-2 bug

First implementation: ask Google for `MP3` at 24 kHz, return `Content-Type: audio/mpeg`, play it in `