Projects

Supervisor | Project Title | Availability
Parvin Mousavi | Actionable AI in ICU | 0 / 1 spots taken
Salimur Choudhury | Automated Urban Infrastructure Report Generation from Drone Imagery Using Large Language Models | 0 / 1 spots taken

Details

  • Actionable AI in ICU
    Supervisor: Parvin Mousavi
    Description:
    ICU environments are fast-paced and high-pressure, and require decision-making under high levels of uncertainty. In this project we will focus on designing actionable predictions from physiological signals collected continuously from ICU patients. This is a multi-disciplinary project run in conjunction with Health Sciences and the Business School at Queen's. Students are required to have a solid understanding of machine learning and experience developing and applying deep learning approaches. Knowledge of signal processing is a plus.
  • Automated Urban Infrastructure Report Generation from Drone Imagery Using Large Language Models
    Supervisor: Salimur Choudhury
    Description:
    This project aims to develop an automated system that utilizes Large Language Models (LLMs) to process and interpret drone-captured urban imagery to generate detailed, human-readable reports. Automated report generation from drone imagery of urban areas can significantly streamline urban planning, infrastructure assessment, and decision-making processes.

    The following are the main objectives of the project:
    • To integrate computer vision models with LLMs to accurately detect, segment, and describe urban objects and scenes.

    • To develop a system that automates the generation of comprehensive, human-readable reports on urban infrastructure using drone-captured imagery and Large Language Models (LLMs).

    The following steps are involved in the development of the project:
    1. Image Acquisition: High-resolution drone imagery will be acquired through various sources.

    2. Object Detection and Segmentation: Computer vision models will be developed and applied to detect and segment key urban objects from the imagery. These models will identify and classify buildings, roads, parks, vehicles, and infrastructure components.

    3. Fine-Tuning LLMs: Large Language Models will be fine-tuned on datasets related to urban infrastructure. This will enable the models to better understand and describe urban scenes and objects.

    4. Caption Generation: The fine-tuned LLMs will generate descriptive captions for each detected object and scene. These captions will include information about object attributes (e.g., size, condition), conditions (e.g., damage, wear), and spatial relationships (e.g., proximity to other objects, location within the scene).

    5. Quantitative Measurements: Implement quantitative measures such as the Pavement Quality Index (PQI) and other relevant metrics to assess the condition and quality of infrastructure elements. These measurements will be integrated into the report generation process for a comprehensive analysis.

    6. Report Compilation: The generated captions, object information, and geospatial data will be combined to create comprehensive reports. These reports will provide a detailed analysis of the urban environment, highlighting key observations and insights.

    Focus and Scope:
    To keep the project's scope manageable, the initial focus will be on a small set of infrastructure elements, which can be expanded in future iterations.

    Potential Applications:
    Urban planning, infrastructure assessment, and decision-making for urban development and management.
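
The six development steps above can be sketched as a minimal Python skeleton. All names here (DetectedObject, detect_objects, generate_caption, pavement_quality_index, compile_report) are illustrative placeholders rather than an existing API, and the PQI computation is a toy stand-in for the real index, which is based on measured surface distress:

```python
from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str      # e.g. "road", "building" (step 2 output)
    bbox: tuple     # (x, y, w, h) in image pixel coordinates
    condition: str  # e.g. "good", "cracked" (step 4 attribute)


def detect_objects(image_id: str) -> list[DetectedObject]:
    """Step 2 stand-in: a real system would run a detection/segmentation model."""
    return [
        DetectedObject("road", (0, 0, 400, 80), "cracked"),
        DetectedObject("building", (50, 100, 120, 200), "good"),
    ]


def generate_caption(obj: DetectedObject) -> str:
    """Step 4 stand-in: a fine-tuned LLM (step 3) would produce richer captions."""
    return f"{obj.label} in {obj.condition} condition at bbox {obj.bbox}"


def pavement_quality_index(objects: list[DetectedObject]) -> float:
    """Step 5 stand-in: fraction of detected road objects not flagged as damaged."""
    roads = [o for o in objects if o.label == "road"]
    if not roads:
        return 1.0
    return sum(o.condition == "good" for o in roads) / len(roads)


def compile_report(image_id: str) -> str:
    """Step 6: combine captions and metrics into a human-readable report."""
    objects = detect_objects(image_id)
    lines = [f"Report for image {image_id}:"]
    lines += [f"- {generate_caption(o)}" for o in objects]
    lines.append(f"Pavement quality index: {pavement_quality_index(objects):.2f}")
    return "\n".join(lines)


print(compile_report("drone_flight_001"))
```

A real system would replace each stand-in with the corresponding component (vision model, fine-tuned LLM, geospatial metadata join), but the report-compilation step would still reduce to composing their outputs as above.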

 
