BAAI/bge-m3 is an interesting model: it is multilingual and has a context size of 8192 tokens. Even with a 16x larger context, computing its embeddings is only about 4x slower in the worst case. This change also includes a minor refactor of the rake task, allowing the model and concurrency level to be set when running the backfill task.
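As a rough sketch of what the refactored task could look like (the task name, the `Topic` model, and `Embeddings.generate` are placeholders; only the idea of model and concurrency arguments comes from this change):

```ruby
# frozen_string_literal: true

# Hypothetical sketch: a backfill task that takes the model name and a
# concurrency level as task arguments, with sensible defaults.
desc "Backfill embeddings, optionally picking the model and concurrency"
task "ai:embeddings:backfill", %i[model concurrency] => :environment do |_t, args|
  model = args[:model] || "bge-m3"
  concurrency = (args[:concurrency] || 1).to_i

  # concurrent-ruby ships with Rails, so a fixed-size pool is an easy
  # way to bound how many embedding requests run at once.
  pool = Concurrent::FixedThreadPool.new(concurrency)

  Topic.find_each do |topic|
    pool.post { Embeddings.generate(topic, model: model) }
  end

  pool.shutdown
  pool.wait_for_termination
end
```

Under those assumptions it would be invoked as, e.g., `bin/rake "ai:embeddings:backfill[bge-m3,4]"`.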
README.md
bert-base-uncased.json
Licensed under Apache License

claude-v1-tokenization.json
Licensed under MIT License

all-mpnet-base-v2.json
Licensed under Apache License

llama-2-70b-chat-hf.json
Licensed under LLAMA 2 COMMUNITY LICENSE AGREEMENT

multilingual-e5-large.json
Licensed under MIT License

bge-large-en.json
Licensed under MIT License

mixtral.json
Licensed under Apache 2.0 License

bge-m3.json
Licensed under MIT License
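These files are standard Hugging Face tokenizer definitions, so any of them can be loaded directly. A minimal sketch using the `tokenizers` Ruby gem (the gem choice and the file path are assumptions, not prescribed by this README):

```ruby
require "tokenizers"

# Load one of the bundled tokenizer definitions (path is illustrative).
tokenizer = Tokenizers.from_file("bge-m3.json")

# Encode a multilingual sample and inspect the resulting pieces.
encoding = tokenizer.encode("Bonjour le monde, こんにちは世界")
puts encoding.tokens.inspect
puts "token count: #{encoding.ids.size}"
```

Counting tokens this way is also a quick check against the model's 8192-token context limit before sending text off for embedding.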