
Nvidia Touts Strong Results on Financial Services Inference Benchmark 

The next-gen Hopper family may be on its way, but that isn’t stopping Nvidia’s popular A100 GPU from topping another benchmark before it bows out. This time, it’s the STAC-ML inference benchmark, produced by the Securities Technology Analysis Center (STAC) and aimed at evaluating machine learning (ML) performance on key workloads in the financial sector. Specifically, the benchmark measures the time from new data input to new model output for the long short-term memory (LSTM) models that are applied to time-series financial data.
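STAC-ML itself is a closed benchmark suite, but the quantity it measures is straightforward to illustrate. The sketch below assumes PyTorch and uses made-up model dimensions and window length rather than the actual STAC-ML model definitions; it times each pass from new input to new output and reports the median, 99th-percentile, and max-to-median figures of the kind discussed in the results below.

```python
# Minimal sketch of the measurement STAC-ML describes: the time from a new
# time-series input arriving to the LSTM producing its output, summarized at
# the 99th percentile. Model dimensions, window length, and iteration count
# are illustrative assumptions, not the STAC-ML model definitions.
import time
import numpy as np
import torch

torch.set_grad_enabled(False)

model = torch.nn.LSTM(input_size=32, hidden_size=128, num_layers=2, batch_first=True)
model.eval()

window = torch.randn(1, 64, 32)  # one window of 64 time steps, 32 features each

latencies = []
for _ in range(1000):
    start = time.perf_counter()
    output, _ = model(window)          # "new data in" -> "new model output"
    latencies.append(time.perf_counter() - start)

lat = np.array(latencies)
print(f"median latency:   {np.median(lat) * 1e3:.3f} ms")
print(f"99th percentile:  {np.percentile(lat, 99) * 1e3:.3f} ms")
print(f"max / median:     {lat.max() / np.median(lat):.2f}x")
```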

STAC-ML consists of three LSTM models of varying complexity. Nvidia benchmarked using a Supermicro Ultra SuperServer equipped with A100 GPUs. Nvidia says that it achieved “low latencies in the 99th percentile” on the benchmark with little variation according to model complexity.

“Notably, there were no large outliers in Nvidia's latency, as the maximum latency was no more than 2.3× the median latency across all LSTMs and the number of model instances, ranging up to 32 concurrent instances,” wrote Malcolm deMayo, global vice president for the financial services industry at Nvidia, in a blog post this week.

In addition to the low latencies, Nvidia also cited “leading throughput”: on the least complex of the three models, the hardware delivered in excess of 1.7 million inferences per second; on the most complex, up to 12,800 per second. Further, deMayo wrote, Nvidia was the first to submit results for STAC’s “Tacana Suite,” which evaluates performance on sliding time-series analyses of the kind used in high-frequency trading. Finally, Nvidia says that it achieved “record-setting performance” on two additional STAC benchmarks: STAC-A2, which is aimed at option price discovery, and STAC-A3, which is aimed at model backtesting.
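The article does not spell out the Tacana workload itself, but a sliding time-series analysis of the general sort it describes can be sketched as follows; the window length, feature count, and `on_new_tick` helper are illustrative assumptions, not part of any STAC specification.

```python
# Rough sketch of a sliding time-series workload: each new observation shifts
# a fixed-length window forward by one step and triggers a fresh inference.
# Window length and feature count are illustrative assumptions.
from collections import deque
from typing import Optional

import torch

torch.set_grad_enabled(False)

WINDOW, FEATURES = 64, 32
model = torch.nn.LSTM(input_size=FEATURES, hidden_size=128, batch_first=True)
model.eval()

buffer = deque(maxlen=WINDOW)

def on_new_tick(features: torch.Tensor) -> Optional[torch.Tensor]:
    """Append the latest observation and run inference once the window is full."""
    buffer.append(features)
    if len(buffer) < WINDOW:
        return None
    window = torch.stack(list(buffer)).unsqueeze(0)  # shape (1, WINDOW, FEATURES)
    output, _ = model(window)
    return output[:, -1, :]  # prediction for the most recent time step

for _ in range(200):
    on_new_tick(torch.randn(FEATURES))
```

Because every incoming tick triggers a fresh full-window inference, per-inference latency and throughput dominate this style of workload, which is why the latency and throughput figures above are the headline numbers.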

deMayo wrote that the benchmark results show that “A100 GPUs deliver leading performance and workload versatility for financial institutions.” He added that the STAC benchmark results “are closely followed by financial institutions, three-quarters of which rely on machine learning, deep learning or high performance computing,” citing an Nvidia-led survey that was also published this week.

Nvidia also took the opportunity to highlight the A100’s energy efficiency, a selling point that HPC suppliers are increasingly eager to stress.

“On the most demanding LSTM model, Nvidia A100 exceeded 17,700 inferences per second per kilowatt while consuming 722 watts, offering leading energy efficiency,” deMayo wrote, emphasizing how energy efficiency can ameliorate concerns over both energy operating expenses and necessary square footage (which might be a fairly pressing concern for, say, Wall Street-based financial institutions).
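As a rough consistency check (assuming the quoted efficiency and power figures describe the same run on the most complex model), the numbers line up with the roughly 12,800 inferences per second cited above:

```python
# Back-of-the-envelope check on the quoted efficiency figure, assuming the
# efficiency and power numbers describe the same run on the most complex LSTM.
inferences_per_sec_per_kw = 17_700
power_kw = 722 / 1000          # 722 watts

implied_throughput = inferences_per_sec_per_kw * power_kw
print(f"implied throughput: {implied_throughput:,.0f} inferences/sec")
# ~12,780 inferences/sec, consistent with the roughly 12,800/sec cited above.
```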

EnterpriseAI