The timm leaderboard
timm/leaderboard has been updated with the ability to select different hardware benchmark sets: RTX4090, RTX3090, two different CPUs, along with some NCHW/NHWC layout and torch.compile (dynamo) variations.
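For context, the layout and compile variations map to standard PyTorch settings. A minimal sketch of what an NHWC + torch.compile inference config looks like (the model name comes from the list below; batch size is illustrative, not what the leaderboard uses):

```python
import torch
import timm

# Load a pretrained timm model for inference
model = timm.create_model('test_vit.r160_in1k', pretrained=True).eval().cuda()

# NHWC (channels-last) memory layout variation
model = model.to(memory_format=torch.channels_last)

# torch.compile (dynamo) variation
model = torch.compile(model)

# 160x160 matches the .r160 pretrained resolution; batch size is arbitrary here
x = torch.randn(256, 3, 160, 160, device='cuda').to(memory_format=torch.channels_last)
with torch.inference_mode():
    y = model(x)
```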
Also worth pointing out, there are three rather newish 'test' models you'll see at the top of any samples/sec comparison:
* test_vit (timm/test_vit.r160_in1k)
* test_efficientnet (timm/test_efficientnet.r160_in1k)
* test_byobnet (timm/test_byobnet.r160_in1k, a mix of resnet, darknet, effnet/regnet-like blocks)
They are < 0.5M params, insanely fast, and originally intended for unit testing w/ real weights. They have awful ImageNet top-1; it's rare for anyone to bother training a model this small on ImageNet (the classifier is roughly 30-70% of the param count!). However, they are FAST on very limited hardware, and you can fine-tune them well on small data. Could be the model you're looking for? A quick sketch of that fine-tuning angle follows.
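A minimal sketch of setting one up for small-data fine-tuning (the class count is a placeholder; the timm calls are standard API):

```python
import timm

# Swap the ImageNet-1k classifier for a small-data head; since the classifier
# is a large fraction of these models' params, this shrinks them even further
model = timm.create_model('test_vit.r160_in1k', pretrained=True, num_classes=10)

n_params = sum(p.numel() for p in model.parameters())
print(f'{n_params / 1e6:.2f}M params')  # well under 0.5M

# Standard fine-tuning from here: optimizer over model.parameters(), etc.
```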