
AI Systems Engineer

Aaryan Patwardhan

I build systems that see, decide, and heal themselves.


About

Systems thinker.
Vision AI and autonomy.

B.E. Information Technology student at SPPU (2027). I design autonomous AI pipelines — from real-time computer vision at 55fps to self-healing drone fleets. I work at the intersection of deep learning, systems design, and edge inference.

Pune, India · IST (UTC+5:30) · < 24-hour response time

Experience

What I've built

AI Core Lead

2026

SentinelMesh — INSPIRON 5.0 (CSI COEP)

  • Designed a MAPE-K autonomic loop for a self-healing autonomous drone fleet
  • Ran YOLOv8n at 55fps on an RTX 3050 Ti with dual CUDA co-inference (vision + LLM)
  • Built an Adversarial Debate Engine: Agent-A Dispatcher vs Agent-B Skeptic before any dispatch
  • Implemented confidence-tier gating — LLM bypassed at conf ≥ 0.88 for ~50ms dispatch
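
The confidence-tier gate above can be sketched in a few lines. This is a minimal illustration of the threshold idea, not the project's actual code: `route_detection`, the detection dict shape, and the path names are hypothetical stand-ins, while the 0.88 threshold comes from the bullet.

```python
# Sketch of confidence-tier gating: high-confidence detections skip
# the LLM review stage and dispatch directly. All names here are
# illustrative; only the 0.88 threshold is from the description above.

CONF_THRESHOLD = 0.88  # detections at or above this bypass the LLM

def route_detection(detection: dict) -> str:
    """Return the dispatch path for one detection."""
    if detection["conf"] >= CONF_THRESHOLD:
        # Fast path: dispatch directly, ~50ms end to end
        return "fast_dispatch"
    # Slow path: hand the detection to the LLM debate stage for review
    return "llm_review"

print(route_detection({"conf": 0.91}))  # fast_dispatch
print(route_detection({"conf": 0.42}))  # llm_review
```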

AI/ML Engineer

2026

Ghost-Admin — Autonomous Server Healing Agent

  • Designed a MAPE-K feedback loop for autonomous server diagnosis and remediation
  • Integrated local LLM inference via llama-cpp-python with CUDA acceleration
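
A MAPE-K loop (Monitor, Analyze, Plan, Execute over shared Knowledge), as used in both SentinelMesh and Ghost-Admin, can be sketched as below. This is a toy illustration of the control-loop shape only; the class, the metric, and the remediation action are all invented for the example.

```python
# Minimal MAPE-K loop sketch: each step monitors, analyzes for a
# symptom, plans a remediation, and executes it, reading and writing
# a shared knowledge store. Names and thresholds are illustrative.

class MapeK:
    def __init__(self):
        self.knowledge = {"cpu": 0.0, "restarts": 0}  # shared K

    def monitor(self):
        # A real agent would read live system metrics; stubbed here.
        self.knowledge["cpu"] = 0.95

    def analyze(self) -> bool:
        return self.knowledge["cpu"] > 0.9  # symptom detected?

    def plan(self) -> str:
        return "restart_service"  # choose a remediation

    def execute(self, action: str):
        if action == "restart_service":
            self.knowledge["restarts"] += 1

    def step(self):
        self.monitor()
        if self.analyze():
            self.execute(self.plan())

loop = MapeK()
loop.step()
print(loop.knowledge["restarts"])  # 1
```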

Backend Engineer

2026

Upwork Automation Pipeline

  • Built a two-version automated job-hunting pipeline with LLM scoring and Telegram alerts
  • V2: composite scoring (50% AI relevance, 25% client quality, 25% competition opportunity)
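
The V2 composite score is a weighted sum, which can be sketched as follows. The weights are taken from the bullet above; the function name and the assumption that each sub-score is normalized to 0–1 are mine.

```python
# Sketch of the V2 composite job score: 50% AI relevance,
# 25% client quality, 25% competition opportunity. The scoring
# function and input normalization are illustrative assumptions.

WEIGHTS = {"relevance": 0.50, "client": 0.25, "competition": 0.25}

def composite_score(relevance: float, client: float, competition: float) -> float:
    """Combine three 0-1 sub-scores into one weighted score."""
    return (WEIGHTS["relevance"] * relevance
            + WEIGHTS["client"] * client
            + WEIGHTS["competition"] * competition)

print(round(composite_score(0.9, 0.6, 0.8), 3))  # 0.8
```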

Projects

Things I've shipped

SentinelMesh
Vision AI · Automation · Systems

Autonomous drone fleet that detects, debates, and dispatches — without human input.

55fps YOLOv8n · < 50ms dispatch · RTX 3050 Ti · MAPE-K loop
Autonomous Upwork Pipeline
Automation · Backend

Automated job-hunting with LLM scoring, client quality signals, and Telegram delivery.

Composite scoring · Telegram alerts · Zero manual review · SQLite dedup
Ghost-Admin
Automation · Systems · Backend

Server-healing agent that monitors, diagnoses, and self-repairs without human input.

MAPE-K loop · Local LLM · Zero-downtime
PocketLawyer Edge AI
Automation · Systems

On-device legal assistant for Android with fully local LLM inference — no server, no data leaks.

On-device LLM · Android · Zero API calls
PPE Detection System
Vision AI

Real-time safety-compliance detection for industrial environments at 60fps.

60fps · YOLOv8 · Real-time alerts
Student Attendance Analytics
Full-Stack · Backend

Automated attendance monitoring with an analytics dashboard for educational institutions.

SQLite · Analytics dashboard · Export reports
Full-Stack E-Commerce Platform
Full-Stack · Backend

End-to-end online store with inventory management, cart, and order processing.

Python/Flask · REST API · Full cart flow

Skills

The stack


Vision AI

YOLOv8 · 2 projects

55fps real-time detection on edge hardware

OpenCV · 2 projects

Real-time frame pipeline with CUDA-accelerated preprocessing

Machine Learning

PyTorch · 2 projects

Custom training pipelines, mixed precision, CUDA streams

llama-cpp-python · 2 projects

Local GGUF inference, Qwen2.5-1.5B Q4_K_M at production latency

Languages

Python · 7 projects

Primary language across all ML and backend work

Backend

Flask · 2 projects

Full-stack Python/Flask with REST APIs and template rendering

FastAPI · 1 project

Async WebSocket server for real-time drone fleet coordination

React · 2 projects

Leaflet.js dashboard with WebSocket real-time marker updates

SQLite · 3 projects

Lightweight persistent dedup and feedback storage
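
The "persistent dedup" pattern mentioned above can be sketched with SQLite's `INSERT OR IGNORE` against a primary key: an item seen twice is stored once, and the insert's row count tells you whether it was new. Table and column names here are hypothetical, not from the actual pipeline.

```python
# Sketch of SQLite-backed dedup: a PRIMARY KEY plus INSERT OR IGNORE
# silently drops repeats, so rowcount distinguishes new from seen.
import sqlite3

conn = sqlite3.connect(":memory:")  # the real store would be a file
conn.execute("CREATE TABLE seen_jobs (job_id TEXT PRIMARY KEY, title TEXT)")

def record_if_new(job_id: str, title: str) -> bool:
    """Return True only the first time a job_id is recorded."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO seen_jobs (job_id, title) VALUES (?, ?)",
        (job_id, title),
    )
    return cur.rowcount == 1  # 1 row inserted: new; 0: duplicate

print(record_if_new("j1", "ML pipeline gig"))  # True
print(record_if_new("j1", "ML pipeline gig"))  # False
```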

Systems & Infra

CUDA / cuDNN · 3 projects

Dual CUDA inference: YOLOv8 + GGUF LLM simultaneously on RTX 3050 Ti

Fish / Bash Shell · 2 projects

Scripted automation and system tooling on Garuda Linux / Arch

Arch Linux / Garuda · 1 project

Primary development environment; deep kernel and driver familiarity

Mobile

Android · 1 project

Edge AI app with local LLM inference on-device

Contact

Let's build something.

Available for freelance AI/ML engineering work. Response time: < 24 hours.

hello@aaryanpatwardhan.dev · GitHub ↗ · LinkedIn ↗
Built with Next.js · Deployed on Vercel