I'd take a look at the open-source project https://github.com/marqo-ai/marqo instead. It does steps 1 and 2 (below) out of the box. You can use CLIP or any model you want, really.
Here are the types of images your users might be searching for:
Examples of a class of objects - say "lion" - CLIP is your best friend here: it's trained on captions and naturally weighted towards popular images.
Conceptual ideas at an abstract level - 'beautiful painting' - CLIP is OK here, but it's not really optimised for this, and most 'beautiful paintings' aren't labelled as such, but rather something like 'Glowing fields of joy'. Fear not, Conceptual-CLIP has your back - it can nab these.
Diagrams/slides/technical images/things with text - you're SOL beyond word matching, tbh. If you don't already have the text, maybe OCR the images?
Pictures of specific things like people or logos - to "find similar" you'll want to embed old-style image features like corners, SIFT descriptors, etc. Note that an iceberg under a clear sky is identical to a desert under a clear sky to this approach.
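To make that last caveat concrete, here's a toy sketch - plain NumPy colour histograms rather than real SIFT, and synthetic images invented for illustration - showing how low-level features score two pictures that share a lot of sky as near-identical even though their subjects differ:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel colour histogram, concatenated and L1-normalised."""
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def similarity(a, b):
    """Histogram intersection: 1.0 means identical colour distributions."""
    return float(np.minimum(a, b).sum())

# Two synthetic 64x64 RGB images, each ~70% clear blue sky:
sky = np.zeros((45, 64, 3), np.uint8); sky[..., 2] = 230    # blue sky
ice = np.full((19, 64, 3), 240, np.uint8)                   # white iceberg
sand = np.zeros((19, 64, 3), np.uint8); sand[..., :2] = 200 # yellow desert

iceberg = np.vstack([sky, ice])
desert = np.vstack([sky, sand])

print(similarity(color_histogram(iceberg), color_histogram(desert)))
# → 0.703125: the shared sky dominates, despite completely different subjects
```

The score is driven almost entirely by the 70% of pixels both images share, which is exactly why this approach works for "find this specific logo again" but not for semantic search.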
1. Get CLIP embeddings for text & images
2. Put them in a vector database (Pinecone.io or something similar)
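Those two steps can be sketched end-to-end. In the snippet below, `embed()` is a hypothetical stand-in for a real CLIP encoder (it returns deterministic random unit vectors, since the actual model call depends on your setup - with sentence-transformers it would be something like `SentenceTransformer('clip-ViT-B-32').encode(...)`), and a brute-force NumPy cosine search stands in for the hosted vector DB, which scores matches the same way:

```python
import numpy as np

DIM = 512  # CLIP ViT-B/32 embedding size

def embed(item):
    """Hypothetical stand-in for a CLIP encoder: a deterministic random
    unit vector per input. Swap in a real model for actual search."""
    r = np.random.default_rng(abs(hash(item)) % 2**32)
    v = r.normal(size=DIM)
    return v / np.linalg.norm(v)

# Step 1: embed your corpus (image ids here) into one index matrix.
image_ids = ["lion.jpg", "sunset.jpg", "diagram.png"]
index = np.stack([embed(i) for i in image_ids])

# Step 2: embed the query and rank by cosine similarity (dot product
# of unit vectors) - a vector DB does the same thing, just at scale.
def search(query, k=2):
    scores = index @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [(image_ids[i], float(scores[i])) for i in top]

print(search("a photo of a lion"))
```

With a real CLIP model the text query and the images land in the same embedding space, so "a photo of a lion" actually ranks lion images first - that shared space is the whole trick.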
It's unreasonably effective. Check out this search engine: https://same.energy/