Can AI auto-tag an entire image library from filenames, folder structure, and image analysis — making assets queryable by AI agents?
AI agents that generate landing pages, emails, or ads need to know what images are available and what they represent. But most asset libraries are just folders of files with cryptic names. This experiment connects to an S3/R2 bucket, scans every asset, and auto-generates metadata tags using filename patterns, folder structure, and AI image analysis. A file at `/products/blue-widget/hero.jpg` gets tagged `product`, `blue-widget`, `hero`. The result: an asset library that AI agents can query semantically.
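The filename-and-folder signal can be sketched in a few lines. This is a minimal illustration of the `/products/blue-widget/hero.jpg` example above, not the experiment's actual pipeline; the `SINGULAR` normalisation map is a hypothetical stand-in for whatever inflection handling a real system would use.

```python
from pathlib import PurePosixPath

# Hypothetical singularisation map so the folder "products" yields the tag
# "product"; a real system might use a proper inflection library instead.
SINGULAR = {"products": "product"}

def tags_from_key(key: str) -> list[str]:
    """Derive candidate tags from an object key's folder path and filename."""
    path = PurePosixPath(key.lstrip("/"))
    folders = [SINGULAR.get(part, part) for part in path.parts[:-1]]
    return folders + [path.stem]  # stem drops the extension: "hero.jpg" -> "hero"

print(tags_from_key("/products/blue-widget/hero.jpg"))  # ['product', 'blue-widget', 'hero']
```

These structural tags would then be merged with whatever the AI image analysis returns for the same object.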
From unstructured folder of files to a semantically queryable asset library.
Folder structure is the strongest signal — well-organised buckets produce better auto-tags than AI image analysis alone.
A simple tag taxonomy (type, product, variant, context) covers 90% of agent needs — over-engineering the schema adds complexity without value.
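As a sketch of how small that taxonomy can stay, the four facets map onto one flat record that agents can filter on. The field names mirror the facets named above, but the exact schema and the `matches` helper are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssetTags:
    """One record per asset; every facet is optional."""
    type: Optional[str] = None     # e.g. "product", "lifestyle", "logo"
    product: Optional[str] = None  # e.g. "blue-widget"
    variant: Optional[str] = None  # e.g. "hero", "thumbnail"
    context: Optional[str] = None  # e.g. "landing-page", "email"

    def matches(self, **facets: str) -> bool:
        """True when every requested facet equals this asset's tagged value."""
        return all(getattr(self, k) == v for k, v in facets.items())

hero = AssetTags(type="product", product="blue-widget", variant="hero")
print(hero.matches(type="product", product="blue-widget"))  # True
print(hero.matches(product="red-widget"))                   # False
```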
Periodic re-scanning for new assets is essential — the library needs to stay current without manual triggers.
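The re-scan itself reduces to a set diff between the stored manifest and a fresh bucket listing (the listing would come from the bucket API, e.g. S3's ListObjectsV2, which is omitted here); only the new keys need to go through tagging again. The function name and shapes below are assumptions, not the experiment's code.

```python
def diff_scan(known: set[str], listed: set[str]) -> tuple[set[str], set[str]]:
    """Compare the stored manifest against a fresh bucket listing.

    Returns (new keys that still need tagging, deleted keys to prune
    from the index).
    """
    return listed - known, known - listed

new, removed = diff_scan(
    known={"products/blue-widget/hero.jpg"},
    listed={"products/blue-widget/hero.jpg", "products/red-widget/hero.jpg"},
)
print(sorted(new), sorted(removed))  # ['products/red-widget/hero.jpg'] []
```

Run on a schedule (a cron job or scheduled worker), this keeps the library current without anyone triggering a re-scan by hand.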
Human review via a grid UI is fast enough to be practical (~5 minutes for 200 assets) — full automation isn’t necessary if the review step is lightweight.
This feeds directly into the self-optimising landing pages experiment. When the page agent can query a tagged asset library, it makes better visual choices. We’re testing whether the tagging system can also infer brand guidelines from visual patterns — not just categorise, but understand style.