Active Exploration in Partially Observable Environment


Objective: Consider a search-and-rescue mission in which rescue personnel must scan hundreds of potential regions to locate a missing person. A common strategy in such operations is to use UAVs to capture aerial imagery that can help identify a target of interest (e.g., the missing person). However, constraints such as a limited field of view, high acquisition costs, time pressure, and restricted bandwidth between the sensor and the processing unit make the search extremely challenging, demanding strategic decisions about where to query next based on the observations gathered so far. In many real-world settings, a complete image of the search space is not available upfront: an autonomous drone on a rescue mission may only capture partial glimpses through a series of narrow observations, so the agent must make decisions under incomplete information. This project aims to develop scalable methods that leverage generative modeling techniques to improve decision-making under partial observability.
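The query-then-update loop described above can be sketched with a minimal toy example. The code below is a hypothetical illustration (not the project's actual method): a Bayesian agent searches a 1-D grid for a hidden target using a noisy sensor, greedily querying the cell with the highest posterior probability and updating its belief after each binary observation. The grid size, sensor rates, and stopping threshold are all assumed for illustration.

```python
import random

def active_search(n_cells=25, target=7, tpr=0.9, fpr=0.1, budget=50, seed=0):
    """Toy greedy Bayesian active search over a 1-D grid (illustrative only).

    The agent maintains a posterior over the target's location, queries the
    most probable cell, and performs a Bayes update on each noisy observation.
    """
    rng = random.Random(seed)
    belief = [1.0 / n_cells] * n_cells  # uniform prior over target location
    for step in range(1, budget + 1):
        cell = max(range(n_cells), key=lambda c: belief[c])  # greedy query
        # Noisy sensor: fires with prob tpr at the target, fpr elsewhere
        hit = rng.random() < (tpr if cell == target else fpr)
        # Bayes update: likelihood of this observation under each hypothesis
        for c in range(n_cells):
            if c == cell:
                like = tpr if hit else (1 - tpr)
            else:
                like = fpr if hit else (1 - fpr)
            belief[c] *= like
        total = sum(belief)
        belief = [b / total for b in belief]
        if belief[cell] > 0.99:  # confident enough to stop
            return step, belief
    return budget, belief
```

Even this simple sketch shows the core tension of active exploration: each query is costly, so the agent must balance revisiting promising cells against ruling out unexplored ones. The project's generative-modeling approach replaces the hand-coded sensor model and greedy policy with learned components.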

Related Publications:

  1. GOMAA-Geo: GOal Modality Agnostic Active Geo-localization (NeurIPS 2024)

  2. Active Search in Partially Observable Environments (coming soon)

Representative figures from the papers.