On-device legal assistant for Android with fully local LLM inference — no server, no data leaks.
Legal information in India is complex and hard to access. Cloud-based AI assistants expose sensitive legal queries to third-party servers.
Built a fully on-device Android app with a quantized LLM running locally. Zero network calls for inference. Privacy-preserving by architecture.
A quantized LLM bundled with the APK. On-device inference via Android NDK bindings. Query preprocessing and context injection. Response streaming to the chat UI.
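The pipeline above can be sketched in plain Java. All names here are illustrative, not taken from the actual app: in the real app the `generateStreaming` step would call into the native model through JNI/NDK bindings, while this sketch substitutes a stub generator so the preprocessing, context-injection, and token-streaming flow is runnable end to end.

```java
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of the query pipeline; names are assumptions,
// not the app's actual API. The stub generator stands in for the
// native (NDK) inference call.
public class QueryPipeline {

    // Normalize the raw user query before it reaches the model.
    static String preprocess(String query) {
        return query.trim().replaceAll("\\s+", " ");
    }

    // Inject retrieved legal context into a fixed prompt template.
    static String buildPrompt(String query, List<String> contextSnippets) {
        StringBuilder sb = new StringBuilder();
        sb.append("You are a legal assistant for Indian law.\n\nContext:\n");
        for (String snippet : contextSnippets) {
            sb.append("- ").append(snippet).append('\n');
        }
        sb.append("\nQuestion: ").append(query).append("\nAnswer:");
        return sb.toString();
    }

    // Stand-in for the native inference call: streams tokens to the
    // UI callback one at a time, as the NDK layer would.
    static void generateStreaming(String prompt, Consumer<String> onToken) {
        for (String token : "This is a stub response".split(" ")) {
            onToken.accept(token + " ");
        }
    }

    public static void main(String[] args) {
        String query = "  What is   a rent agreement? ";
        String prompt = buildPrompt(preprocess(query),
                List.of("A rent agreement is a contract between landlord and tenant."));
        StringBuilder ui = new StringBuilder();
        // The chat UI appends tokens as they arrive.
        generateStreaming(prompt, ui::append);
        System.out.println(ui.toString().trim());
    }
}
```

Keeping the prompt assembly separate from the native call keeps the JNI surface minimal: the native layer only ever sees a finished prompt string and emits tokens.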
100% on-device inference. No data transmitted to any server. Viable on mid-range Android hardware.
