NIAR’s New Special Exhibition “Seeing through Your Microtext: From Signboards to Scenarios—The Secret behind Speaking Symbols” Held at NLPI

When walking down the street, you will find that signs and billboards are everywhere. But have you ever considered that they are more than advertising tools? In fact, they contain clues that AI can interpret. The National Center for High-performance Computing (NCHC), under the National Institutes of Applied Research (NIAR) of the National Science and Technology Council (NSTC), has collaborated with the National Library of Public Information (NLPI) to host a special exhibition titled “Seeing through Your Microtext: From Signboards to Scenarios—The Secret behind Speaking Symbols” at the “Secret Base of Scientists @ Taichung” on the second floor of the library. Through interactive displays, the exhibition introduces visitors to AI-based image and text recognition technologies, demonstrating how critical information can be uncovered from images and applied to public safety and smart investigation, creating new possibilities for life in the age of technology.
Learning AI from Street Views & Finding Answers in Images
The exhibition features an interactive display that simulates in-vehicle image recognition scenarios. Visitors can operate a device that shows a model car in a moving street scene, turning a steering wheel to change the view. When the camera “captures” a specific venue sign, the system instantly analyzes the image and displays relevant venue information. This allows the public to intuitively understand how locations are pinpointed from the images and text in street views.
Another section of the exhibition presents popular science knowledge about image-text recognition—Using AI to Decode Clothing and Color. By digesting thousands of images, the AI learns to classify the different parts of a person’s outfit, such as hats, jackets, and pants, and even to identify colors like red, blue, or brown. This technology can be applied to virtual fitting and smart outfit recommendations, and can even help the police identify clothing clues in images.
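The color-naming step described above can be illustrated with a minimal sketch. This is not the exhibition’s actual model, which learns categories from thousands of labeled images; here a hypothetical fixed palette and a nearest-color rule stand in for the learned classifier.

```python
import math

# Hypothetical palette of named reference colors (RGB values are
# illustrative assumptions, not taken from the NCHC system).
PALETTE = {
    "red": (220, 20, 60),
    "blue": (30, 60, 200),
    "brown": (139, 69, 19),
    "black": (20, 20, 20),
    "white": (240, 240, 240),
}

def classify_color(rgb):
    """Return the palette name closest to the given pixel, by Euclidean distance."""
    return min(PALETTE, key=lambda name: math.dist(rgb, PALETTE[name]))
```

For example, `classify_color((200, 30, 40))` returns `"red"`: the pixel sits far closer to the red reference point than to any other entry, which is the same nearest-prototype idea a trained classifier applies in a learned feature space rather than raw RGB.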
From Image-text Recognition to Public Safety Applications—Multimodal Recognition
The exhibition also features the NCHC-developed “Digital Twin Policing Investigation Platform.” This system integrates multimodal AI tools and large language models (LLMs) to assist the police in efficiently conducting text analysis, intelligence consolidation, and task assignment. From image interpretation and semantic analysis to report compilation, AI functions like a team of digital experts, significantly reducing case processing time and showcasing a new model of smart policing.
In the past, Optical Character Recognition (OCR) technology was mainly used for static document recognition. Today, however, the research team has developed an AI model that can dynamically recognize text in street scenes. It can handle the complex layouts, colors, and obstructions commonly found on Taiwanese signs, achieving an impressive 95% accuracy rate. Combined with scene pattern recognition technology, the system can identify location information by analyzing contextual features such as buildings or road signs, mimicking the way humans interpret the space they are in. This makes powerful image-to-location search capabilities possible, offering a new solution for public safety applications.
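Once sign text has been recognized, pinpointing the venue amounts to matching noisy OCR output against a venue index. The sketch below uses fuzzy string matching to tolerate the character errors that occlusion and complex layouts introduce; the venue names and locations are hypothetical stand-ins for the platform’s real database.

```python
import difflib

# Hypothetical venue index; the real platform queries a database built
# from street-view imagery rather than a hand-written dictionary.
VENUE_INDEX = {
    "Fengjia Night Market": "Xitun District, Taichung",
    "Miyahara": "Central District, Taichung",
    "National Library of Public Information": "South District, Taichung",
}

def locate_from_sign(ocr_text, cutoff=0.6):
    """Fuzzy-match OCR output against known venue names.

    OCR of street signs is noisy, so an exact dictionary lookup would
    often fail; get_close_matches tolerates dropped or garbled characters.
    """
    hits = difflib.get_close_matches(ocr_text, VENUE_INDEX, n=1, cutoff=cutoff)
    return VENUE_INDEX[hits[0]] if hits else None
```

A partially misread sign such as `"Fengjia Nigh Market"` still resolves to the Xitun District entry, while gibberish below the similarity cutoff returns `None` instead of a false location.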
Installing Smart Technology Systems & Achieving Remarkable Results in Policing Applications
This technology has already been deployed in real-world law enforcement. NCHC has collaborated with the Criminal Investigation Corps of the Taichung City Police Department to establish the “Digital Twin Policing Investigation Platform: TCPB Intelligence Assistant,” which has accumulated over one million image data entries. As the public frequently uploads videos of street conflicts online, the system can automatically analyze locations whenever signs or street views appear in the footage, helping the police quickly pinpoint the scene, retrieve surveillance footage, and even cross-check wanted criminals, significantly reducing investigation time. Even when no text is visible in the footage, the new AI model can analyze background features and identify possible locations through image matching, replacing the labor-intensive manual comparison process and greatly enhancing efficiency. The system has been successfully applied in multiple social incident investigations.
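Text-free image matching of the kind described above is commonly done by comparing scene embeddings: a neural network encodes each image’s background features as a vector, and the query is matched to the most similar stored scene. The sketch below assumes such embeddings already exist; the location names and vectors are illustrative, not drawn from the actual platform.

```python
import math

# Hypothetical scene embeddings; in practice a neural network would
# produce these vectors from buildings, road signs, and other cues.
SCENE_DB = {
    "Taichung Station plaza": [0.9, 0.1, 0.3],
    "Gongyi Road overpass": [0.2, 0.8, 0.5],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def best_location(query_vec):
    """Return the stored location whose embedding is most similar to the query."""
    return max(SCENE_DB, key=lambda loc: cosine(query_vec, SCENE_DB[loc]))
```

A query embedding close to the first vector, e.g. `[0.85, 0.15, 0.25]`, resolves to “Taichung Station plaza”; scaling this from two entries to a million-image database is an indexing problem, which is where the platform’s accumulated street-view data comes in.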
Finally, a large LEGO-style photo zone is set up in the exhibition area, paired with humorous, detective-style phrases that invite visitors to engage light-heartedly while experiencing how AI “reads” our everyday surroundings. Don’t underestimate a simple street view or a line of signboard text. You may think there is nothing there at all, but AI sees it all. Welcome to NLPI to unravel the mystery of intelligence hidden within images and texts!
Exhibition Information
Title: “Seeing through Your Microtext: From Signboards to Scenarios—The Secret behind Speaking Symbols”
Date: July 22, 2025 (Tuesday) to October 30, 2025 (Thursday)
Venue: Mini Exhibition Area (2F), National Library of Public Information

