Independent ML Researcher
Research: https://inductive.ml
Email: nakulparmar15@gmail.com
GitHub: https://github.com/avp1598
Location: Bangalore, India
I work on representation learning, parameter-efficient fine-tuning, LoRA geometry, adapter canonicalization, small-data generalization, and LLM systems.
I study how independently trained LoRA adapters can be compared under the gauge freedom of low-rank factorization. My current work investigates whether canonicalized adapter updates reveal task representations that are stable across random seeds, ranks, and model scales.
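The gauge freedom mentioned above can be made concrete with a minimal NumPy sketch: a LoRA update ΔW = BA is unchanged when B is multiplied by any invertible matrix G and A by its inverse, and an SVD of the full update gives one canonical factorization shared by all gauges. All shapes and names here are illustrative, not drawn from my codebase.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 12, 4  # illustrative dimensions and LoRA rank

# A toy LoRA update: delta_W = B @ A, rank r.
B = rng.normal(size=(d, r))
A = rng.normal(size=(r, k))

# Gauge transformation: any invertible G leaves the product unchanged,
# so (B, A) and (B @ G, G^-1 @ A) describe the same update.
G = rng.normal(size=(r, r))
B2, A2 = B @ G, np.linalg.inv(G) @ A
assert np.allclose(B @ A, B2 @ A2)

def canonicalize(B, A, r):
    """Map a factorization to a canonical form via SVD of the full update."""
    U, s, Vt = np.linalg.svd(B @ A, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r, :]

# Both gauge-related factorizations reconstruct the same canonical update.
U1, V1 = canonicalize(B, A, r)
U2, V2 = canonicalize(B2, A2, r)
assert np.allclose(U1 @ V1, U2 @ V2)
assert np.allclose(U1 @ V1, B @ A)
```

Comparing adapters through such a canonical form, rather than through the raw (B, A) factors, is what makes cross-seed comparison meaningful in the first place.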
I am exploring frozen-decoder architectures with learned concept slots, controller modules, and controlled attention injection for compositional reasoning and retrieval-like behavior.
I study whether models can infer structured tool-use behavior from limited input-output examples, and under what conditions symbolic composition, constrained decoding, or learned adapters works best.
- Independent ML researcher
- Software engineer
- Former student at Manipal
- Research interests: LoRA geometry, representation learning, LLM adaptation, interpretability, sparse inference, and AI systems