mirror of https://github.com/tensorflow/models.git
# TensorFlow Community Models

This repository provides a curated list of GitHub repositories with machine learning models and implementations powered by TensorFlow 2.

**Note:** Contributing companies and individuals are responsible for maintaining their repositories.
## Computer Vision

### Image Recognition

### Object Detection
| Model | Paper | Features | Maintainer |
|---|---|---|---|
| R-FCN | R-FCN: Object Detection via Region-based Fully Convolutional Networks | • Int8 Inference<br>• FP32 Inference | Intel |
| SSD-MobileNet | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | • Int8 Inference<br>• FP32 Inference | Intel |
| SSD-ResNet34 | SSD: Single Shot MultiBox Detector | • Int8 Inference<br>• FP32 Inference<br>• FP32 Training | Intel |
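Several entries above list Int8 inference. As a rough illustration of the idea behind post-training int8 quantization — mapping float tensors to int8 with a per-tensor scale, then dequantizing — here is a minimal NumPy sketch (purely illustrative; the Intel repositories use TensorFlow's quantization tooling, not this code):

```python
import numpy as np

# Hypothetical sketch of symmetric per-tensor int8 quantization.
x = np.array([0.1, -0.5, 0.9], dtype=np.float32)

# Scale so the largest magnitude maps to 127 (the int8 positive limit).
scale = np.max(np.abs(x)) / 127.0

# Quantize: divide, round, clamp to the int8 range.
q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)

# Dequantize: multiply back; the result matches x up to rounding error.
deq = q.astype(np.float32) * scale
```

The payoff in a real deployment is that the matrix multiplies run on the int8 tensors, trading a small rounding error for much cheaper arithmetic.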
### Segmentation
| Model | Paper | Features | Maintainer |
|---|---|---|---|
| Mask R-CNN | Mask R-CNN | • Automatic Mixed Precision<br>• Multi-GPU training support with Horovod<br>• TensorRT | NVIDIA |
| U-Net Medical Image Segmentation | U-Net: Convolutional Networks for Biomedical Image Segmentation | • Automatic Mixed Precision<br>• Multi-GPU training support with Horovod<br>• TensorRT | NVIDIA |
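Automatic Mixed Precision, listed for the NVIDIA models above, runs most compute in float16 while guarding against float16 underflow via loss scaling. A minimal NumPy sketch of why the scaling step exists (illustrative only; the repositories rely on TensorFlow's built-in AMP, not code like this):

```python
import numpy as np

# float16 underflows for very small values, so a plain float16 backward
# pass would silently zero out tiny gradients.
tiny_grad = 1e-8
assert np.float16(tiny_grad) == 0.0  # lost entirely in float16

# Loss scaling: multiply the loss (and therefore every gradient) by a large
# constant before the float16 backward pass, then unscale in float32.
scale = 2.0 ** 16
scaled = np.float16(tiny_grad * scale)   # now representable in float16
recovered = np.float32(scaled) / scale   # unscaled in float32, value preserved
```

AMP frameworks adjust this scale dynamically, backing it off whenever the scaled gradients overflow instead of underflow.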
## Natural Language Processing
| Model | Paper | Features | Maintainer |
|---|---|---|---|
| BERT | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | • FP32 Inference<br>• FP32 Training | Intel |
| BERT | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | • Horovod Multi-GPU<br>• Multi-node with Horovod and Pyxis/Enroot Slurm cluster<br>• XLA<br>• Automatic Mixed Precision<br>• LAMB | NVIDIA |
| ELECTRA | ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators | • Automatic Mixed Precision<br>• Multi-GPU training support with Horovod<br>• Multi-node training on a Pyxis/Enroot Slurm cluster | NVIDIA |
| GNMT | Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation | • FP32 Inference | Intel |
| Transformer-LT (Official) | Attention Is All You Need | • FP32 Inference | Intel |
| Transformer-LT (MLPerf) | Attention Is All You Need | • FP32 Training | Intel |
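Multi-GPU training with Horovod, which several NVIDIA entries list, boils down to each worker computing gradients on its own data shard and an allreduce averaging them so all replicas stay synchronized. A toy NumPy sketch of that averaging step (`allreduce_mean` is a hypothetical stand-in for Horovod's averaging allreduce, shown only to convey the idea):

```python
import numpy as np

# Hypothetical stand-in for Horovod's averaging allreduce: every worker
# contributes its local gradients and receives the element-wise mean.
def allreduce_mean(worker_grads):
    return np.mean(np.stack(worker_grads), axis=0)

# Two simulated workers, each with gradients from its own data shard.
grads_w0 = np.array([1.0, 2.0])
grads_w1 = np.array([3.0, 4.0])

# Both replicas apply the same averaged gradient, keeping weights in sync.
avg = allreduce_mean([grads_w0, grads_w1])  # → array([2., 3.])
```

In practice the allreduce runs as a ring or tree collective over NCCL or MPI, so no single node has to gather every worker's gradients.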
## Recommendation Systems
| Model | Paper | Features | Maintainer |
|---|---|---|---|
| Wide & Deep | Wide & Deep Learning for Recommender Systems | • FP32 Inference<br>• FP32 Training | Intel |
| Wide & Deep | Wide & Deep Learning for Recommender Systems | • Automatic Mixed Precision<br>• Multi-GPU training support with Horovod<br>• XLA | NVIDIA |
| DLRM | Deep Learning Recommendation Model for Personalization and Recommendation Systems | • Automatic Mixed Precision<br>• Hybrid-parallel multi-GPU training using Horovod all-to-all<br>• Multi-node training on Pyxis/Enroot Slurm clusters<br>• XLA<br>• Criteo dataset preprocessing with Spark on GPU | NVIDIA |
## Contributions
If you want to contribute, please review the contribution guidelines.
