How were the performance and capacity planning metrics determined?

Created by Steve Place, Modified on Wed, Jun 7, 2023 at 2:12 PM by Steve Place

Please read this docs page as background for this article; it also contains the most up-to-date version of the table below.

Users occasionally ask us how the numbers in our capacity planning table were generated:

Number of Triples | JVM Heap Memory | Direct Memory | Total System Memory
100 million       | 3G              |               |
1 billion         | 8G              | 20G           | 32G
10 billion        | 30G             | 80G           | 128G
25 billion        | 60G             | 160G          | 256G
50 billion        | 80G             | 380G          | 512G

These recommendations are based on a variety of workloads we run over different benchmarks, including but not limited to BSBM, LDBC, and LUBM. The workloads test reads and writes, including concurrent user access. Giving Stardog more memory almost always improves performance, so allocating more memory than this table specifies is generally a good idea. Memory requirements vary with the dataset and query complexity, so we recommend testing with your own workload.
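As a sketch of how a row from the table can be applied in practice, the snippet below sets heap and direct memory for the 1 billion triple tier using the `STARDOG_SERVER_JAVA_ARGS` environment variable that Stardog reads for JVM options. The flags are standard JVM options; adjust the values to match your row in the table.

```shell
# Sketch: apply the 1-billion-triple row from the table above.
# -Xms/-Xmx set the JVM heap; -XX:MaxDirectMemorySize sets direct
# (off-heap) memory. Set this before starting the Stardog server.
export STARDOG_SERVER_JAVA_ARGS="-Xms8g -Xmx8g -XX:MaxDirectMemorySize=20g"
```

Note that total system memory in the table is larger than heap plus direct memory, leaving headroom for the operating system and filesystem cache.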
