Building intelligent systems that think. LLM integrations, RAG pipelines, and cloud-native architectures — from idea to production at BrandCloud Inc.
I'm Sakib — a full-stack software engineer with 4+ years building production systems. I currently work in an engineering leadership role at BrandCloud Inc. (NDI Division) in Tokyo, Japan.
My work sits at the intersection of AI/LLM integration and product engineering. I design RAG pipelines, build API-first services, and deploy on AWS, Google Cloud, and Cloudflare with Docker and CI/CD.
I mentor engineers through code review and architecture decisions, and I focus on reliable delivery, clean design, and measurable product outcomes.
From LLM fine-tuning to Flutter apps. I go deep where it matters.
4+ years across backend, AI integration, and engineering leadership.
Leading technical strategy for the NDI division. Oversee a 12-engineer team, define architecture standards, and drive AI-first product delivery. Built KnowledgeLinks AI — an enterprise RAG system using OpenAI, Claude, and Gemini. Own AWS infrastructure and CI/CD practices to keep delivery reliable and scalable.
Took ownership of complex full-stack features and systems. Led development of the Benrimono on-demand delivery platform, including the Flutter app, serverless AWS backend, and real-time tracking. Introduced early LLM capabilities for internal tools and customer-facing workflows, and helped standardize code review and design patterns across the team.
Joined BrandCloud as a full-time software engineer and became a core contributor across the stack. Built and maintained Django REST APIs, developed UI features with React and Vue.js, and supported PostgreSQL/Redis-backed systems. Worked with product and design teams to ship customer-facing features while improving testing and deployment workflows.
Completed an intensive technical and professional training program in Japan, building engineering discipline and cross-cultural communication skills. Developed a structured approach to teamwork and software delivery that I carried into later engineering roles.
First professional engineering role. Gained hands-on experience with Go (Beego), Flask, Node.js, Express, Vue.js, and REST APIs. Delivered production features under real deadlines and strengthened core habits in clean coding, fast learning, and team collaboration.
Production systems across AI, mobile, and cloud — solving real problems.
Enterprise AI knowledge management system. Employees query internal documents via multi-LLM RAG pipelines and get instant, cited answers. Deployed on AWS with real-time streaming responses.
Automated daily short-form video pipeline that detects trending topics for Bangladesh & Japan, generates scripts with Gemini, and produces FFmpeg slideshows — fully hands-free content creation.
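The slideshow step of a pipeline like this can be sketched as assembling a single FFmpeg invocation from the generated slides. A minimal illustration, assuming the script has already been rendered into per-slide images; `build_ffmpeg_cmd` and its defaults are hypothetical, not the production code:

```python
# Sketch: turn still images into a 9:16 short-form slideshow via ffmpeg.
# Assumes slides were already generated upstream; names are illustrative.

def build_ffmpeg_cmd(images, seconds_per_slide=3, out="short.mp4"):
    """Build an ffmpeg command that concatenates stills into one video."""
    cmd = ["ffmpeg", "-y"]
    for img in images:
        # Each image is a looped input shown for a fixed duration.
        cmd += ["-loop", "1", "-t", str(seconds_per_slide), "-i", img]
    n = len(images)
    # Scale every input to vertical 1080x1920, then concat the streams.
    filt = "".join(f"[{i}:v]scale=1080:1920,setsar=1[v{i}];" for i in range(n))
    filt += "".join(f"[v{i}]" for i in range(n)) + f"concat=n={n}:v=1:a=0[out]"
    cmd += ["-filter_complex", filt, "-map", "[out]", out]
    return cmd

cmd = build_ffmpeg_cmd(["slide1.png", "slide2.png"])
# cmd is ready to hand to subprocess.run(cmd, check=True)
```

Keeping the command as a list (rather than a shell string) avoids quoting bugs when slide filenames come from generated titles.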
AI-powered lead intake system that processes inbound leads via Make.com workflows, enriches them with OpenAI, and syncs structured data directly into Google Sheets as a lightweight CRM.
Production-style RAG backend for an internal support copilot — ingests documents, retrieves relevant chunks, and returns grounded answers with citations via a FastAPI service. Dockerized with CI.
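The retrieve-and-cite flow at the heart of a backend like this can be sketched in a few lines. This is a toy stand-in, not the actual service: real deployments use embeddings and a vector store, while here plain token overlap plays the retriever, and the document ids double as citations:

```python
# Minimal RAG retrieval sketch: rank pre-chunked documents against a
# query and return the top hits with source ids to cite in the answer.

def score(query, chunk_text):
    """Crude relevance: fraction of query tokens found in the chunk."""
    q, c = set(query.lower().split()), set(chunk_text.lower().split())
    return len(q & c) / max(len(q), 1)

def retrieve(query, chunks, k=2):
    """Return the top-k chunks; their ids serve as citations."""
    ranked = sorted(chunks, key=lambda ch: score(query, ch["text"]), reverse=True)
    return ranked[:k]

docs = [
    {"id": "handbook#3", "text": "Vacation requests go through the HR portal"},
    {"id": "handbook#7", "text": "Expense reports are due by month end"},
]
hits = retrieve("how do I request vacation", docs, k=1)
# The prompt to the LLM would include hits plus the question, and the
# grounded answer is returned alongside hits[0]["id"] as its citation.
```

Swapping `score` for an embedding similarity call is the only change needed to move from this sketch toward a vector-store-backed retriever.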
Backend for AI-assisted workflow orchestration with approval gates, execution tracking, and retry handling — agents and human reviewers collaborate in a single pipeline via FastAPI. Dockerized with CI.
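An approval gate with retry handling can be illustrated as a small state machine: an action runs, transient failures are retried, and a human reviewer decides whether the result is accepted. The `Task` shape and state names below are assumptions for illustration, not the production schema:

```python
# Sketch: execute a step, retry transient failures, then pause for a
# human approval gate before marking the task done.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    attempts: int = 0
    state: str = "pending"   # pending -> awaiting_approval -> done / failed

def run(task, action, approver, max_retries=2):
    """Run `action` with retries; `approver` is the human reviewer."""
    while task.attempts <= max_retries:
        task.attempts += 1
        try:
            result = action()
        except Exception:
            continue                     # transient failure: retry
        task.state = "awaiting_approval"
        if approver(result):             # reviewer gates the result
            task.state = "done"
            return result
        task.state = "failed"            # rejected by reviewer
        return None
    task.state = "failed"                # retries exhausted
    return None

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient")
    return "draft report"

task = Task("generate_report")
out = run(task, flaky, approver=lambda r: True)
```

In a real service the `awaiting_approval` state would persist and the loop would resume on a reviewer callback rather than blocking inline.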
On-demand delivery and errand app connecting users with gig workers across iOS and Android. Real-time tracking, push notifications, and serverless AWS backend at scale.
Production-ready REST API in Go with full Swagger docs, PostgreSQL, robust validation, and structured error handling.
Automated image and structured data extraction from URLs with dynamic schema creation and intelligent duplicate detection.
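Duplicate detection in an extraction pipeline often reduces to hashing each normalized record and skipping anything already seen. A minimal sketch under that assumption; the field names and in-memory `seen` set are illustrative (a real system would back this with a database index):

```python
# Sketch: flag re-crawled records by hashing their normalized content.
import hashlib

seen = set()

def is_duplicate(record):
    """Hash sorted key/value pairs so field order never matters."""
    key = hashlib.sha256(
        "|".join(f"{k}={record[k]}" for k in sorted(record)).encode()
    ).hexdigest()
    if key in seen:
        return True
    seen.add(key)
    return False

a = {"url": "https://example.com/p/1", "title": "Widget"}
first = is_duplicate(a)         # first sighting: stored, not a duplicate
second = is_duplicate(dict(a))  # identical content: flagged as duplicate
```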
No-code form generator with custom field types, conditional logic, and real-time data management dashboard.
eBay-like auction platform with real-time bidding, watchlists, and category-based discovery — built on Django.
Open to AI engineering projects, leadership roles, and interesting conversations.
Whether you have a project that needs AI integration or a team that needs technical leadership, or you just want to talk about the future of LLMs — my inbox is open.