
Distributed coverage and control work in simulation and Crazyflie hardware experiments
I work on distributed coverage and control problems in the UCSD Multi-Agent Robotics Lab and test how they hold up in both simulation and hardware. Most of the work lives in the loop between ROS feedback, tuning, and logging, and in seeing what changes once the controller leaves the clean simulation setting.

Coverage performance depends heavily on localization quality, so hardware runs can diverge quickly from clean simulation assumptions.
Controller gains that look stable in one scenario do not always transfer across disturbances, battery state, or platform setup changes.
I use ROS and Python tooling to make runs easy to log, compare, and debug rather than treating hardware experiments as one-off demos.
The work combines centroid control, coverage logic, and repeatable experiment structure so controller changes can be tied to measured behavior.
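The centroid-driven coverage logic can be sketched as a Lloyd-style update: partition the domain among agents by nearest-neighbor assignment, then drive each agent toward the centroid of its region. This is a minimal illustration, not the lab's actual controller; the proportional `gain` and the grid discretization of the domain are assumptions.

```python
import numpy as np

def lloyd_coverage_step(positions, domain_pts, gain=0.5):
    """One centroidal (Lloyd-style) coverage update.

    positions:  (n, 2) agent positions
    domain_pts: (m, 2) sample points discretizing the coverage domain
    gain:       proportional gain toward each agent's region centroid
    """
    # Assign each domain sample to its nearest agent (Voronoi partition).
    d2 = ((domain_pts[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)

    new_positions = positions.copy()
    for i in range(len(positions)):
        cell = domain_pts[owner == i]
        if len(cell):  # an empty cell leaves the agent in place
            centroid = cell.mean(axis=0)
            new_positions[i] += gain * (centroid - positions[i])
    return new_positions
```

Iterating this update monotonically reduces the standard coverage cost (sum of squared distances from each domain point to its nearest agent), which is one concrete metric a logged run can be compared against.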
I compare simulation traces against Crazyflie hardware runs and use those gaps to refine controller structure and gain selection.
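One way to quantify the sim-to-hardware gap is to resample one trace onto the other's timestamps and compute an RMS error over the overlapping window. This is a hypothetical helper for illustration; the real comparison may use more states and alignment steps.

```python
import numpy as np

def trace_gap(t_sim, x_sim, t_hw, x_hw):
    """RMS gap between simulation and hardware traces of one state.

    Resamples the hardware trace onto the simulation timestamps over
    the overlapping time window, then returns the RMS error there.
    """
    t0, t1 = max(t_sim[0], t_hw[0]), min(t_sim[-1], t_hw[-1])
    mask = (t_sim >= t0) & (t_sim <= t1)
    x_hw_resampled = np.interp(t_sim[mask], t_hw, x_hw)
    return float(np.sqrt(np.mean((x_sim[mask] - x_hw_resampled) ** 2)))
```

Tracking this number per run makes "the hardware diverged from sim" a measured claim instead of an impression.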
System identification and logging are part of the workflow because repeatability matters more than single-run performance.
The work sharpened how I think about disturbance, measurement quality, and controller robustness on small robotic platforms.
It also reinforced a useful habit: isolate the variable, log the run, and treat the hardware as the real test of the control design.
The next step is to push toward broader scenarios and tighter convergence under disturbance while keeping the experiments comparable across runs.