About
OnView compares the visual characteristics of artworks, revealing similarities across cultures and time.
How It Works
The OnView dataset contains over 2,000 images of paintings from the Metropolitan Museum of Art, all part of the museum's Open Access Initiative. Each image has been processed with OpenAI's CLIP model to generate an embedding, a numerical representation of the image's visual features and patterns. These embeddings make it possible to compare images for visual similarity. When a user clicks the "Find Similar" button for a particular artwork, that image's embedding is compared against the rest of the dataset, and the most visually similar artworks are returned as the search results.
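For the curious, here is a minimal sketch of how this kind of search can work, assuming the open-source `clip` package (github.com/openai/CLIP) and a precomputed matrix of embeddings; the file paths, function names, and the choice of the ViT-B/32 checkpoint are illustrative, not necessarily what OnView uses:

```python
# Sketch of CLIP-based visual similarity search (assumptions noted above).
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_image(path: str) -> torch.Tensor:
    """Encode one image into a unit-length CLIP embedding."""
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        emb = model.encode_image(image)
    return emb / emb.norm(dim=-1, keepdim=True)

def find_similar(query: torch.Tensor, dataset: torch.Tensor, k: int = 5):
    """Return indices of the k most visually similar artworks.

    `dataset` is an (N, D) matrix of unit-normalized embeddings, one row
    per artwork, computed ahead of time with embed_image().
    """
    scores = dataset @ query.squeeze(0)  # cosine similarity via dot product
    return scores.topk(k).indices.tolist()
```

Because the embeddings are normalized to unit length, a simple dot product gives the cosine similarity, so the whole search reduces to one matrix-vector multiplication followed by a top-k lookup.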
Want to know more?
If you're interested in learning more about the project or its creator, I've written more extensively about the experience and the underlying technologies on my personal website.