Grantee Research Project Results
Final Report: LAMA: Localization and Mapping Artificial Intelligence Application
EPA Contract Number: 68HERC24C0018
Title: LAMA: Localization and Mapping Artificial Intelligence Application
Investigators: Batchko, Robert
Small Business: Holochip Corporation
EPA Contact: Richards, April
Phase: I
Project Period: December 1, 2023 through May 30, 2024
Project Amount: $100,000
RFA: Small Business Innovation Research (SBIR) - Phase I (2024)
Research Category: Small Business Innovation Research (SBIR)
Description:
Holochip’s Localization and Mapping Artificial Intelligence Application (LAMA) is a robust indoor mapping and localization solution for use in disaster areas. A key objective of LAMA is the ability to generate large, real-time, and shared 3D maps of indoor sites without relying on GPS, WiFi access points, or Bluetooth beacons, ensuring reliable, fault- and dropout-tolerant performance in challenging and data-deprived conditions often encountered during disaster scenarios.
The Phase I effort included advances in AR/mapping software development. VisiSLAM was implemented, allowing users to view a comprehensive, detailed representation of the scanned environment. Capabilities for creating, saving, and loading world maps, markers, and routes were developed, greatly enhancing the LAMA application's usability and functionality. Surface- and object-detection algorithms were improved, enabling more accurate placement of, and interaction with, virtual objects through ray casting.
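As a concrete illustration of this ray-casting approach, the following minimal Swift sketch (assuming a RealityKit ARView with a running world-tracking session; the function name and marker geometry are illustrative, not taken from the LAMA codebase) places a marker entity on the surface hit by a ray cast from a screen point:

```swift
import ARKit
import RealityKit
import UIKit

// Hedged sketch: place a marker entity at the surface hit by a screen-space ray cast.
// `arView` is assumed to be an ARView already running a world-tracking session.
func placeMarker(at screenPoint: CGPoint, in arView: ARView) {
    // Ray-cast from the touch point against estimated planes of any alignment.
    guard let result = arView.raycast(from: screenPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else {
        return // No surface detected along the ray.
    }

    // Anchor a simple sphere at the world-space hit pose to represent a 3D marker.
    let anchor = AnchorEntity(world: result.worldTransform)
    let marker = ModelEntity(mesh: .generateSphere(radius: 0.03),
                             materials: [SimpleMaterial(color: .red, isMetallic: false)])
    anchor.addChild(marker)
    arView.scene.addAnchor(anchor)
}
```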
Two distinct LAMA user interfaces (UIs) were created: one for on-site coordinators (the "OSC app") and one for responders (the "responder app"). Emphasis was placed on simplicity and efficiency; high-fidelity prototypes were developed in Figma and then transitioned to IMGUI. UI features such as note placement and a crosshair marking system were integrated to improve the user experience (UX) and make navigation and interaction with the application straightforward. Because EPA responders may wear personal protective equipment (PPE) such as gloves, the responder app UI was designed to be particularly minimalistic, with only four touch buttons, reducing the risk of user error.
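The following SwiftUI sketch is purely illustrative of such a glove-friendly four-button layout; the Phase I responder UI itself was implemented in IMGUI, and the button labels and actions shown here are hypothetical:

```swift
import SwiftUI

// Illustrative sketch only: a glove-friendly, four-button responder layout.
// The Phase I responder UI was implemented in IMGUI; the button names here are hypothetical.
struct ResponderControlsView: View {
    var body: some View {
        VStack(spacing: 24) {
            responderButton("Scan")      // start/stop environment scanning
            responderButton("Marker")    // drop a 3D marker at the crosshair
            responderButton("Note")      // attach a note to a marker
            responderButton("Map View")  // toggle between camera and map view
        }
        .padding()
    }

    // Large touch targets reduce the risk of mis-taps when wearing PPE gloves.
    private func responderButton(_ title: String) -> some View {
        Button(title) { /* wire to app actions */ }
            .frame(maxWidth: .infinity, minHeight: 80)
            .font(.title2.bold())
            .background(Color(.systemGray5))
            .cornerRadius(12)
    }
}
```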
Phase I prototype hardware included the Apple 11-inch iPad Pro (4th generation), chosen to run the responder app because it includes a LiDAR sensor. The OSC app was designed to run on any Windows or Linux computer.
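A minimal Swift sketch of how a LiDAR-dependent feature might be gated at runtime using ARKit's scene-reconstruction support check (the function name is illustrative, not from the LAMA codebase):

```swift
import ARKit

// Sketch: gate LiDAR-dependent features on devices that support scene reconstruction.
// The 4th-generation 11-inch iPad Pro's LiDAR sensor satisfies this check.
func makeTrackingConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        // LiDAR available: reconstruct a triangle mesh of the environment.
        configuration.sceneReconstruction = .mesh
    }
    configuration.planeDetection = [.horizontal, .vertical]
    return configuration
}
```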
Parallel processing was enhanced, and crash bugs and race conditions were resolved. The application was updated to the latest Vulkan standard and migrated to RealityKit, improving performance and compatibility with LiDAR data. In addition, real-time map transmission capabilities were implemented, an essential feature for future scalability and enhanced functionality.
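One possible shape for such a real-time transmission path is sketched below in Swift using Apple's Network framework; the host, port, and length-prefix framing are assumptions for illustration, not LAMA's actual protocol:

```swift
import Network
import Foundation

// Hedged sketch: stream serialized map data from the responder iPad to the OSC computer
// over TCP. Host, port, and the framing scheme are assumptions, not LAMA's actual protocol.
final class MapTransmitter {
    private let connection: NWConnection
    private let queue = DispatchQueue(label: "lama.map-transmitter")

    init(host: String, port: UInt16) {
        connection = NWConnection(host: NWEndpoint.Host(host),
                                  port: NWEndpoint.Port(rawValue: port)!,
                                  using: .tcp)
        connection.start(queue: queue)
    }

    // Prefix each payload with its length so the receiver can re-frame the byte stream.
    func send(mapData: Data) {
        var length = UInt32(mapData.count).bigEndian
        var framed = Data(bytes: &length, count: MemoryLayout<UInt32>.size)
        framed.append(mapData)
        connection.send(content: framed, completion: .contentProcessed({ error in
            if let error = error {
                print("Map transmission failed: \(error)")
            }
        }))
    }
}
```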
Summary/Accomplishments (Outputs/Outcomes):
The research conducted during the Phase I effort yielded significant findings and demonstrated the feasibility and effectiveness of the LAMA application. Key results included: a) implementing point cloud visualization; b) enabling detailed and accurate representations of scanned environments; c) creating and sharing large maps in real time; d) saving and loading world maps directly on the devices; e) placing 3D markers in the maps; f) instantly establishing communication between the responder's device (the iPad) and the server (i.e., the OSC computer); g) toggling between camera and map views in the UIs; and h) A* pathfinding. These and other key results enhance LAMA's functionality. The UI/UX for the OSC and responder apps was developed and refined, resulting in simple, effective, and safe interfaces that enhance LAMA's usability and effectiveness in real-world scenarios.
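Item h above refers to A* pathfinding. As a point of reference, the following is a minimal Swift sketch of A* over a 2D occupancy grid; the grid representation, `GridPoint`/`aStar`/`isWalkable` names, and Manhattan-distance heuristic are assumptions for illustration, since the report does not specify LAMA's map structure or heuristic:

```swift
// Minimal A* sketch over a 2D occupancy grid (illustrative; LAMA's actual map
// representation and heuristic are not specified in this report).
struct GridPoint: Hashable { let x, y: Int }

func aStar(from start: GridPoint, to goal: GridPoint,
           isWalkable: (GridPoint) -> Bool) -> [GridPoint]? {
    // Manhattan distance is an admissible heuristic on a 4-connected grid.
    func heuristic(_ p: GridPoint) -> Int { abs(p.x - goal.x) + abs(p.y - goal.y) }

    var openSet: Set<GridPoint> = [start]
    var cameFrom: [GridPoint: GridPoint] = [:]
    var gScore: [GridPoint: Int] = [start: 0]

    // Repeatedly expand the open node with the lowest f = g + h score.
    while let current = openSet.min(by: {
        (gScore[$0, default: .max] + heuristic($0)) < (gScore[$1, default: .max] + heuristic($1))
    }) {
        if current == goal {
            // Reconstruct the path by walking the cameFrom links backwards.
            var path = [current]
            while let prev = cameFrom[path[0]] { path.insert(prev, at: 0) }
            return path
        }
        openSet.remove(current)
        let neighbors = [GridPoint(x: current.x + 1, y: current.y),
                         GridPoint(x: current.x - 1, y: current.y),
                         GridPoint(x: current.x, y: current.y + 1),
                         GridPoint(x: current.x, y: current.y - 1)]
        for next in neighbors where isWalkable(next) {
            let tentative = gScore[current, default: .max] + 1
            if tentative < gScore[next, default: .max] {
                cameFrom[next] = current
                gScore[next] = tentative
                openSet.insert(next)
            }
        }
    }
    return nil // No path found.
}
```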
Feasibility of the LAMA Phase I prototype was demonstrated through the successful achievement of many challenging technical milestones. The AR/mapping software development achieved point cloud visualization, allowing users to view a detailed representation of the scanned environment. The ability to save and load maps on the devices was achieved, enhancing functionality and user experience. Improvements in surface-detection algorithms enabled more accurate placement of, and interaction with, virtual objects through ray casting. Virtual marker and note interaction was implemented, allowing placement of, and interaction with, 3D markers and notes. The integration of ARKit into ORB-SLAM3 facilitated the capture of feature points and world information to build comprehensive maps. Other mapping improvements included marker placement in BVH maps, real-time map rendering using Vulkan, and the ability to render either geometry or feature points based on availability.
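For context, on-device saving and loading of world maps can be expressed with ARKit's ARWorldMap API roughly as follows; the file location and function names are placeholders, not LAMA's implementation:

```swift
import ARKit

// Sketch of saving and loading ARKit world maps on-device, in the spirit of the
// map save/load capability described above. The file URL is a placeholder.
let mapFileURL = FileManager.default.temporaryDirectory.appendingPathComponent("world.map")

func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            print("No world map available: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            // Serialize the map with secure coding and write it atomically to disk.
            let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true)
            try data.write(to: mapFileURL, options: .atomic)
        } catch {
            print("Failed to save world map: \(error)")
        }
    }
}

func loadWorldMap(into session: ARSession) throws {
    let data = try Data(contentsOf: mapFileURL)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                from: data) else { return }
    // Relocalize against the saved map by restarting tracking with it as the initial map.
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```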
The OSC and responder apps were developed and refined with a focus on simplicity and efficiency. High-fidelity prototypes were created in Figma and transitioned to IMGUI. A crosshair marking system was introduced to help users identify points of interest.
Advancements included optimizing threading and parallelism and fixing crash bugs and race conditions. The application was updated to the latest Vulkan standards and migrated to RealityKit for improved performance and compatibility with LiDAR data. The groundwork for real-time map transmission capabilities, crucial for future scalability and functionality, was also laid.
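As an illustration of one way races on shared map state can be eliminated, the sketch below funnels all mutations through a Swift actor; this is a generic pattern, not LAMA's implementation, which the report describes only at the level of threading and race-condition fixes:

```swift
import Foundation
import simd

// Generic sketch: serialize access to shared map state through a Swift actor so
// concurrent readers and writers cannot race. Not LAMA's implementation.
actor SharedMapStore {
    private var markers: [UUID: simd_float4x4] = [:]

    // Record a marker pose; concurrent calls are executed one at a time by the actor.
    func addMarker(id: UUID, transform: simd_float4x4) {
        markers[id] = transform
    }

    // Snapshot of all marker poses, e.g., for rendering or transmission.
    func allMarkers() -> [UUID: simd_float4x4] {
        markers
    }
}

// Usage from async code:
// await store.addMarker(id: UUID(), transform: matrix_identity_float4x4)
```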
Conclusions:
The Phase I LAMA effort successfully met its objectives and proved the feasibility of the approach. The Phase I prototype also demonstrated significant potential for future development and commercialization. The integration of advanced AR and mapping technologies showed that a robust application for environmental monitoring and mapping can be built. The features listed above were validated through extensive testing and demonstrations. The focus on UI/UX design resulted in a user-friendly interface that meets the specific needs of OSCs and responders.
The project has established a solid foundation for future development, with planned enhancements including real-time map transmission, further integration with ARKit, and support for Android devices. Continued focus on performance optimization and feature expansion will ensure the application remains at the forefront of AR and mapping technology.
Participation in the Technical and Business Assistance (TABA) program provided support in identifying potential industry partners and end-users. Participation in the I-Corps program at George Washington University focused on intensive customer discovery.
The Phase I prototype LAMA application demonstrated significant potential for commercial use across industries including environmental monitoring, construction, retail, and healthcare. LAMA is particularly beneficial for emergency response scenarios, where reliable and precise indoor mapping is crucial. A solid foundation was established for future development and commercialization, with plans to expand the application's capabilities and market reach in Phase II. These efforts will focus on further technical enhancements, market validation, and strategic partnerships to drive successful commercialization of the LAMA application.
SBIR Phase II:
LAMA: Localization and Mapping Artificial Intelligence Application

The perspectives, information, and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts, and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Conclusions drawn by the principal investigators have not been reviewed by the Agency.