AI Learning

Benchmarks

Python AIBenchmark

https://pypi.org/project/ai-benchmark/

Comparison Table
https://ai-benchmark.com/ranking_cpus_and_gpus_detailed.html

The proposed program

from ai_benchmark import AIBenchmark
benchmark = AIBenchmark()
results = benchmark.run()

crashes with the error:

AttributeError: module 'numpy' has no attribute 'warnings'. Did you mean: 'hanning'?
Workaround:
In the source file
ai_benchmark/utils.py
change
import numpy as np
to
import warnings
import numpy as np
np.warnings = warnings
(Brave AI generated answer)
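Instead of editing the installed file, the same shim can be applied at runtime, before ai_benchmark is imported. This is a sketch based on the workaround above (NumPy 1.24 removed the old np.warnings alias, which is what triggers the AttributeError):

```python
import warnings

import numpy as np

# NumPy 1.24+ removed the np.warnings alias that older ai_benchmark
# versions rely on; restore it before importing the package.
if not hasattr(np, "warnings"):
    np.warnings = warnings

# from ai_benchmark import AIBenchmark  # safe to import after the shim
```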

Note: If ai_benchmark was installed with pip while a conda environment named test02 was active, the file is in

~/miniconda3/envs/test02/lib/python3.10/site-packages/ai_benchmark
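To confirm where pip placed packages in whatever environment is currently active (rather than guessing the conda path), Python can report the site-packages directory directly; this is a generic check, not specific to ai_benchmark:

```python
import sysconfig

# Directory where pip installs pure-Python packages for the
# currently active interpreter (conda env, venv, or system Python).
site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)
```

Appending /ai_benchmark to the printed path gives the directory containing utils.py; alternatively, python -c "import ai_benchmark; print(ai_benchmark.__file__)" points at the installed package directly (once it imports without crashing).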

new-ai-benchmark
There is a revision of the package named new-ai-benchmark
that has the error fixed:

https://pypi.org/project/new-ai-benchmark/

https://snyk.io/advisor/python/new-ai-benchmark

ai-benchmark Code Structure

The program that uses ai-benchmark calls
AIBenchmark().run()

The class AIBenchmark and its run function are defined in core.py
class AIBenchmark:
..
def run(self, precision="normal", test_ids=None, training=True, inference=True, micro=False, cpu_cores=None, inter_threads=None, intra_threads=None):

The run function calls utils.run_tests, which is defined in utils.py.

Results

Laptop
* TF Version: 2.15.0
* Platform: Linux-6.11.0-26-generic-x86_64-with-glibc2.39
* CPU: N/A
* CPU RAM: 7 GB
1.1 - inference | batch=50, size=224x224: 286 ± 3 ms
1.2 - training | batch=50, size=224x224: 1141 ± 5 ms
Device Inference Score: 2621
Device Training Score: 2321
Device AI Score: 4942

Kaggle 2 x T4
* TF Version: 2.18.0
* Platform: Linux-6.6.56+-x86_64-with-glibc2.35
* CPU: N/A
* CPU RAM: 31 GB
* GPU/0: Tesla T4, GPU RAM: 13.6 GB
* GPU/1: Tesla T4, GPU RAM: 13.6 GB
* CUDA Version: 12.5, CUDA Build: V12.5.82, cuDNN: 90300
1/1. MobileNet-V2
1.1 - inference | batch=50, size=224x224: 74.6 ± 6.2 ms
1.2 - training | batch=50, size=224x224: 207 ± 5 ms
Device Inference Score: 10051
Device Training Score: 12790
Device AI Score: 22841

AI server
* TF Version: 2.15.0
* Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
* CPU: N/A
* CPU RAM: 126 GB
* GPU/0: NVIDIA GeForce RTX 3090, GPU RAM: 21.9 GB
* GPU/1: NVIDIA GeForce RTX 3090, GPU RAM: 21.9 GB
* CUDA Version: 12.6, CUDA Build: V12.6.77, cuDNN: 8907
1/1. MobileNet-V2
1.1 - inference | batch=50, size=224x224: 51.7 ± 1.0 ms
1.2 - training | batch=50, size=224x224: 61.8 ± 2.9 ms
Device Inference Score: 14502
Device Training Score: 42873
Device AI Score: 57375

For more information and results, please visit http://ai-benchmark.com/alpha

Additional per-test statistics (mean and std in ms):
[{'prefix': '3.1 - inference', 'mean': 28.857142857142858, 'std': 1.8331787195162057}, {'prefix': '3.2 - training ', 'mean': 85.95238095238095, 'std': 1.0900498230723425}, {'prefix': '5.1 - inference', 'mean': 19.047619047619047, 'std': 0.7221786137191952}, {'prefix': '5.2 - training ', 'mean': 52.523809523809526, 'std': 0.7939681905015745}]
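The mean ± std figures reported above can be reproduced from raw per-run timings with the standard library. The numbers below are illustrative, not actual measurements:

```python
import statistics

# Hypothetical per-run inference times in ms for one test
timings_ms = [51.2, 52.9, 50.8, 51.5, 52.1]

mean = statistics.mean(timings_ms)
std = statistics.stdev(timings_ms)  # sample standard deviation
print(f"inference | batch=50, size=224x224: {mean:.1f} ± {std:.1f} ms")
```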

GPU enabled

https://pypi.org/project/ai-benchmark/
Note 2: For running the benchmark on Nvidia GPUs, NVIDIA CUDA and cuDNN libraries should be installed first. Please find detailed instructions here.
The "here" link points to
https://www.tensorflow.org/install/gpu

Mandelbrot benchmark


Note: from a Brave AI search for "python measure time benchmark":
import time

start_time = time.monotonic()

# Your code here

print('seconds: ', time.monotonic() - start_time)
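Applying that timing pattern, a self-contained Mandelbrot benchmark might look like the sketch below. The grid size and iteration count are arbitrary choices, not a standard benchmark configuration:

```python
import time

def mandelbrot_count(width=200, height=200, max_iter=100):
    """Count grid points in [-2,1] x [-1.5,1.5] that stay bounded."""
    inside = 0
    for j in range(height):
        for i in range(width):
            c = complex(-2.0 + 3.0 * i / width, -1.5 + 3.0 * j / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:
                    break
            else:
                inside += 1  # never escaped: treat as inside the set
    return inside

start_time = time.monotonic()
count = mandelbrot_count()
elapsed = time.monotonic() - start_time
print(f"points inside: {count}, seconds: {elapsed:.3f}")
```

Pure-Python loops like this are CPU-bound, so the elapsed time is a rough single-core benchmark of the interpreter and machine.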
The Brave AI response also mentions line_profiler:
from line_profiler import LineProfiler

def my_function():
    # Your code here
    pass

lp = LineProfiler()
lp_wrapper = lp(my_function)
lp_wrapper()

lp.print_stats()
The response also mentions the memory_profiler package.
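memory_profiler needs a separate pip install; as a rough built-in alternative, the standard library's tracemalloc can report peak Python allocations. A minimal sketch (not a line-by-line profiler, and the workload here is a stand-in):

```python
import tracemalloc

tracemalloc.start()

# Your code here: allocate roughly 1 MB as a stand-in workload
data = [bytearray(1000) for _ in range(1000)]

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current: {current / 1e6:.2f} MB, peak: {peak / 1e6:.2f} MB")
```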
