Le Lézard

Inspur Comes Out on Top with Superior AI Performance in MLPerf Inference V1.1


Recently, MLCommons™, a well-known open engineering consortium, released the results of MLPerf™ Inference V1.1, the leading AI benchmark suite. In the highly competitive Closed Division, Inspur ranked first in 15 of 30 tasks, making it the most successful vendor in this round.

Inspur Results in MLPerf™ Inference V1.1

Vendor | Division            | System   | Model         | Scenario, Accuracy | Score     | Units
Inspur | Data Center, Closed | NF5688M6 | 3D-UNet       | Offline, 99%       | 498.03    | Samples/s
Inspur | Data Center, Closed | NF5688M6 | 3D-UNet       | Offline, 99.9%     | 498.03    | Samples/s
Inspur | Data Center, Closed | NF5488A5 | DLRM          | Offline, 99%       | 2,607,910 | Samples/s
Inspur | Data Center, Closed | NF5688M6 | DLRM          | Server, 99%        | 2,608,410 | Queries/s
Inspur | Data Center, Closed | NF5488A5 | DLRM          | Offline, 99.9%     | 2,607,910 | Samples/s
Inspur | Data Center, Closed | NF5688M6 | DLRM          | Server, 99.9%      | 2,608,410 | Queries/s
Inspur | Edge, Closed        | NE5260M5 | 3D-UNet       | Offline, 99%       | 93.49     | Samples/s
Inspur | Edge, Closed        | NE5260M5 | 3D-UNet       | Offline, 99.9%     | 93.49     | Samples/s
Inspur | Edge, Closed        | NE5260M5 | BERT          | Offline, 99%       | 5,914.13  | Samples/s
Inspur | Edge, Closed        | NF5688M6 | BERT          | SingleStream, 99%  | 1.54      | Latency (ms)
Inspur | Edge, Closed        | NF5688M6 | ResNet50      | SingleStream, 99%  | 0.43      | Latency (ms)
Inspur | Edge, Closed        | NE5260M5 | RNN-T         | Offline, 99%       | 24,446.9  | Samples/s
Inspur | Edge, Closed        | NF5688M6 | RNN-T         | SingleStream, 99%  | 18.5      | Latency (ms)
Inspur | Edge, Closed        | NF5688M6 | SSD-ResNet34  | SingleStream, 99%  | 1.67      | Latency (ms)
Inspur | Edge, Closed        | NF5488A5 | SSD-MobileNet | SingleStream, 99%  | 0.25      | Latency (ms)

Developed by Turing Award winner David Patterson and leading academic institutions, MLPerf™ is the leading industry benchmark for AI performance. Founded in 2020 and built around the MLPerf™ benchmarks, MLCommons™ is an open, non-profit engineering consortium dedicated to advancing standards and metrics for machine learning and AI performance. Inspur is a founding member of MLCommons™, along with over 50 other leading organizations and companies from across the AI landscape.

In the MLPerf™ Inference V1.1 benchmark, the Closed Division included two categories: Data Center (16 tasks) and Edge (14 tasks). The Data Center category covered six models: Image Classification (ResNet50), Medical Image Segmentation (3D-UNet), Object Detection (SSD-ResNet34), Speech Recognition (RNN-T), Natural Language Processing (BERT), and Recommendation (DLRM). A high-accuracy mode (99.9%) was set for BERT, DLRM, and 3D-UNet. Every model was evaluated in both the Server and Offline scenarios, with the exception of 3D-UNet, which was evaluated only in the Offline scenario. For the Edge category, the Recommendation (DLRM) model was removed and the Object Detection (SSD-MobileNet) model was added; a high-accuracy mode (99.9%) was set for 3D-UNet. All Edge models were tested in both the Offline and Single Stream scenarios.
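The scenarios above differ mainly in how queries arrive and which metric is reported: Offline measures aggregate throughput, while SingleStream measures per-query latency. The toy sketch below illustrates that distinction only; it is not the official MLPerf LoadGen harness, and the stand-in model, sample count, and percentile approximation are illustrative assumptions.

```python
import statistics
import time

def run_offline(model, samples):
    """Offline scenario: all samples are available up front; the metric
    is aggregate throughput in samples per second (higher is better)."""
    start = time.perf_counter()
    for sample in samples:
        model(sample)
    elapsed = time.perf_counter() - start
    return len(samples) / elapsed

def run_single_stream(model, samples):
    """SingleStream scenario: one query at a time; the metric is per-query
    latency in milliseconds (lower is better). MLPerf reports a tail
    percentile; here we approximate the 90th percentile via quantiles()."""
    latencies_ms = []
    for sample in samples:
        t0 = time.perf_counter()
        model(sample)
        latencies_ms.append((time.perf_counter() - t0) * 1000.0)
    return statistics.quantiles(latencies_ms, n=10)[8]  # ~90th percentile

# Toy stand-in for real inference: a fixed-cost function.
model = lambda x: sum(range(1000))
samples = list(range(200))

throughput = run_offline(model, samples)            # samples/s
p90_latency_ms = run_single_stream(model, samples)  # ms
```

This is why the table reports Samples/s for Offline entries but Latency (ms) for SingleStream entries: the two scenarios optimize for opposite ends of the throughput/latency trade-off.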

In the extremely competitive Closed Division, where mainstream vendors compete head to head, all participants were required to use the same models and optimizers, making it straightforward to evaluate and compare AI computing system performance across vendors. Nineteen vendors, including NVIDIA, Intel, Inspur, Qualcomm, Alibaba, Dell, and HPE, participated in the Closed Division. A total of 1,130 results were submitted: 710 for the Data Center category and 420 for the Edge category.

Full-Stack AI Capabilities Ramp up Performance

Inspur achieved excellent results in this MLPerf™ round with its three AI servers: NF5488A5, NF5688M6, and NE5260M5.

Inspur ranked first in 15 tasks covering all AI models, including Medical Image Segmentation, Natural Language Processing, Image Classification, Speech Recognition, Recommendation, and Object Detection (SSD-ResNet34 and SSD-MobileNet). The results show that, from cloud to edge, Inspur is ahead of the industry in nearly all aspects. Inspur made large performance gains in various Data Center tasks compared to previous MLPerf™ rounds despite no changes to its server configurations: its results in Image Classification (ResNet50) and Speech Recognition (RNN-T) improved by 4.75% and 3.83%, respectively, over the V1.0 round just six months ago.

The outstanding performance of Inspur's AI servers in the MLPerf™ benchmarks can be credited to Inspur's exceptional system design and full-stack optimization of its AI computing systems. Through precise calibration and optimization, CPU and GPU performance, as well as data communication between CPUs and GPUs, reached the highest levels for AI inference. Additionally, by enhancing round-robin scheduling across multiple GPUs based on GPU topology, performance scales nearly linearly from a single GPU to multiple GPUs.
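The round-robin idea mentioned above can be sketched in a few lines: batches are handed to devices in rotation so that load stays balanced. This is a minimal illustration of the general technique, not Inspur's implementation; the device names and batch counts are hypothetical, and a topology-aware version would additionally order the device list by NUMA/NVLink proximity.

```python
from itertools import cycle
from collections import Counter

# Hypothetical device list; in a real system the ordering would come from
# the GPU topology (e.g. devices sharing a NUMA node or NVLink switch).
gpus = ["cuda:0", "cuda:1", "cuda:2", "cuda:3"]

def round_robin_dispatch(batches, devices):
    """Hand each inference batch to the next device in turn, so work
    spreads evenly and aggregate throughput scales roughly linearly
    with device count (assuming batches have similar cost)."""
    schedule = cycle(devices)
    return [(next(schedule), batch) for batch in batches]

plan = round_robin_dispatch(list(range(8)), gpus)
per_device = Counter(device for device, _ in plan)
# With 8 batches over 4 devices, each device receives exactly 2 batches.
```

Near-linear scaling follows because, under even load, no device sits idle while another is oversubscribed.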

The Inspur NF5488A5 was the only AI server in this MLPerf™ round to support eight 500W A100 GPUs with liquid cooling, which significantly boosted AI computing performance. Among mainstream high-end AI servers with eight NVIDIA A100 SXM4 GPUs, Inspur's servers came out on top in all 16 tasks in the Closed Division's Data Center category.

As a leading AI computing company, Inspur is committed to the R&D and innovation of AI computing, including both resource-based and algorithm platforms. It also works with other leading AI enterprises to promote the industrialization of AI and the development of AI-driven industries through its "Meta-Brain" technology ecosystem.

To view the complete results of MLPerf™ Inference V1.1, please visit:
https://mlcommons.org/en/inference-datacenter-11/
https://mlcommons.org/en/inference-edge-11/

About Inspur Information

Inspur Information is a leading provider of data center infrastructure, cloud computing, and AI solutions, and the world's second-largest server manufacturer. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data centers, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to https://www.inspursystems.com/.

