
Performance

sszuev edited this page Nov 26, 2020 · 5 revisions
LLS Benchmark

Here is a simple load-list-save (LLS) benchmark, which demonstrates that the whole-cycle performance is slightly better for ONT-API in comparison with OWL-API-impl:

Benchmark                                             (data)   Mode  Cnt        Score        Error    Units
LoadListSaveBenchmark.ONT_withoutTransformations       PIZZA  thrpt   25     3251,526 ±     30,813  ops/min
LoadListSaveBenchmark.ONT_withTransformations          PIZZA  thrpt   25     2679,207 ±      8,812  ops/min
LoadListSaveBenchmark.OWL                              PIZZA  thrpt   25     2009,406 ±     14,837  ops/min
LoadListSaveBenchmark.ONT_withoutTransformations      FAMILY  thrpt   25     1295,657 ±      4,855  ops/min
LoadListSaveBenchmark.ONT_withTransformations         FAMILY  thrpt   25     1122,983 ±      5,667  ops/min
LoadListSaveBenchmark.OWL                             FAMILY  thrpt   25      907,334 ±      5,277  ops/min
LoadListSaveBenchmark.ONT_withoutTransformations       GALEN  thrpt   25       15,944 ±      0,253  ops/min
LoadListSaveBenchmark.ONT_withTransformations          GALEN  thrpt   25       13,417 ±      0,147  ops/min
LoadListSaveBenchmark.OWL                              GALEN  thrpt   25       11,945 ±      0,649  ops/min
LoadListSaveBenchmark.ONT_withoutTransformations          PS  thrpt   25      140,326 ±      1,013  ops/min
LoadListSaveBenchmark.ONT_withTransformations             PS  thrpt   25      128,166 ±      1,456  ops/min
LoadListSaveBenchmark.OWL                                 PS  thrpt   25       64,897 ±      8,637  ops/min
LoadListSaveBenchmark.ONT_withoutTransformations          HP  thrpt   25       13,178 ±      0,112  ops/min
LoadListSaveBenchmark.ONT_withTransformations             HP  thrpt   25       11,868 ±      0,101  ops/min
LoadListSaveBenchmark.OWL                                 HP  thrpt   25       10,167 ±      0,065  ops/min
LoadListSaveBenchmark.ONT_withoutTransformations         TTO  thrpt   25        7,871 ±      0,094  ops/min
LoadListSaveBenchmark.ONT_withTransformations            TTO  thrpt   25        7,355 ±      0,065  ops/min
LoadListSaveBenchmark.OWL                                TTO  thrpt   25        5,225 ±      0,098  ops/min
LoadListSaveBenchmark.ONT_withoutTransformations          GO  thrpt   25        2,777 ±      0,034  ops/min
LoadListSaveBenchmark.ONT_withTransformations             GO  thrpt   25        2,573 ±      0,020  ops/min
LoadListSaveBenchmark.OWL                                 GO  thrpt   25        2,241 ±      0,015  ops/min
LA Benchmark

Here is a simple list-axioms (LA) benchmark:

Benchmark                                             (data)   Mode  Cnt        Score        Error    Units
ListAxiomsBenchmark.listAxioms                     PIZZA_ONT  thrpt   25  8336286,073 ± 147882,572  ops/min
ListAxiomsBenchmark.listAxioms                     PIZZA_OWL  thrpt   25  6448910,402 ±  42874,509  ops/min
ListAxiomsBenchmark.listAxioms                    FAMILY_ONT  thrpt   25  3256954,308 ±  35773,056  ops/min
ListAxiomsBenchmark.listAxioms                    FAMILY_OWL  thrpt   25  2388802,430 ±  22562,562  ops/min
ListAxiomsBenchmark.listAxioms                     GALEN_ONT  thrpt   25    47874,637 ±   8287,058  ops/min
ListAxiomsBenchmark.listAxioms                     GALEN_OWL  thrpt   25    23266,369 ±   1131,487  ops/min
ListAxiomsBenchmark.listAxioms                        PS_ONT  thrpt   25   268577,313 ±   5705,086  ops/min
ListAxiomsBenchmark.listAxioms                        PS_OWL  thrpt   25   103997,937 ±   1248,130  ops/min
ListAxiomsBenchmark.listAxioms                        HP_ONT  thrpt   25    33322,333 ±   5090,444  ops/min
ListAxiomsBenchmark.listAxioms                        HP_OWL  thrpt   25    11961,544 ±    461,621  ops/min
ListAxiomsBenchmark.listAxioms                       TTO_ONT  thrpt   25    13356,822 ±   1523,849  ops/min
ListAxiomsBenchmark.listAxioms                       TTO_OWL  thrpt   25     5086,380 ±     70,993  ops/min
ListAxiomsBenchmark.listAxioms                        GO_ONT  thrpt   25     7802,732 ±    419,455  ops/min
ListAxiomsBenchmark.listAxioms                        GO_OWL  thrpt   25     3091,019 ±    134,860  ops/min
The configuration used:
  • ONT-API:2.0.0 (with jena-arq 3.13.1), OWL-API:5.1.11 (owlapi-impl, owlapi-apibinding, owlapi-rio)
  • JMH version: 1.21
  • VM version: JDK 1.8.0_201, Java HotSpot(TM) 64-Bit Server VM, 25.201-b09
  • Benchmark mode: Throughput, ops/time
  • Warmup: 5 iterations, 10 s each
  • Measurement: 5 iterations, 10 s each
  • Timeout: 10 min per iteration
  • Threads: 1 thread, will synchronize iterations
  • The full output is collected in a single file (there is also one for the previous release)
  • The working benchmark code is available in this wiki itself: just clone the wiki repository.
  • Used ontologies:
    name     triples   axioms      size
    PIZZA       1937      945  107.40KB
    FAMILY      5017     2845  207.99KB
    PS         38873    38872    4.04MB
    GALEN     281492    96463   32.91MB
    HP        367315   143855   37.15MB
    TTO       650339   336291   58.97MB
    GO       1436660   556475    85.1MB
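The benchmark code mentioned above lives in the wiki repository itself. Assuming GitHub's standard wiki clone URL for the owlcs/ont-api project, it can be fetched like this:

```shell
# GitHub wikis are plain git repositories; the ".wiki.git" clone URL below
# is assumed from the standard GitHub convention for owlcs/ont-api.
git clone https://github.com/owlcs/ont-api.wiki.git
cd ont-api.wiki
```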
Notes, explanations and thoughts:
  • TURTLE was used as both the input and output format for all tests. Read/write operations are faster with ONT-API because the underlying Jena RIOT is faster than the OWL-API parsers. This follows from the nature of the two APIs: the original OWL-API (i.e. its default implementation and its parsers/storers) is OWL-centric, while Jena is RDF-centric. Since all the data is actually RDF, and OWL is just a sub-language on top of RDF, it is not surprising that the Jena Turtle reader performs better. For details, see the description of the method com.github.owlcs.ontapi.config.LoadSettings#isUseOWLParsersToLoad()

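To compare the two parser stacks directly, ONT-API can be told to load through the native OWL-API parsers instead of Jena RIOT. This is a sketch under two assumptions: the setter name mirrors the documented getter `isUseOWLParsersToLoad()`, and ONT-API's manager returns its own loader configuration here; verify both against your ONT-API version.

```java
import com.github.owlcs.ontapi.OntManagers;
import com.github.owlcs.ontapi.OntologyManager;

public class ParserChoice {
    public static void main(String... args) {
        OntologyManager manager = OntManagers.createManager();
        // Assumed setter paired with LoadSettings#isUseOWLParsersToLoad();
        // when true, loading goes through the native OWL-API parsers
        // instead of Jena RIOT.
        manager.setOntologyLoaderConfiguration(
                manager.getOntologyLoaderConfiguration().setUseOWLParsersToLoad(true));
    }
}
```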
  • The LLS benchmark statistics above were collected both with graph transformations enabled (the default) and disabled; the ontologies used do not require any tuning, since they are already well-formed OWL2 ontologies.
    With the graph-transformation option turned off, the benchmark unequivocally shows that ONT-API is faster in all cases.

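Turning transformations off for already-valid OWL2 input can be sketched as follows; `setPerformTransformation(..)` is assumed from the ONT-API load settings and should be checked against your version.

```java
import com.github.owlcs.ontapi.OntManagers;
import com.github.owlcs.ontapi.OntologyManager;

public class NoTransformations {
    public static void main(String... args) {
        OntologyManager manager = OntManagers.createManager();
        // Skip the graph-transformation step on load (assumed setter);
        // only safe when the input is already a well-formed OWL2 ontology.
        manager.setOntologyLoaderConfiguration(
                manager.getOntologyLoaderConfiguration().setPerformTransformation(false));
    }
}
```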
  • One would expect the first iteration over axioms to be faster with OWL-API, since there all objects are already stored in memory in exactly the form in which they are required, whereas in ONT-API there is only an RDF graph, and to retrieve an axiom it must be read from the graph and then, possibly, cached. But the LA benchmark statistics above show the opposite, because the cache is populated during the warmup iterations. Also, OWL-API-impl currently (v5.1.11) uses HPPC collections, apparently to reduce memory consumption, while ONT-API uses plain arrays to provide fast iteration, without caring much about memory.

  • How to deal with big ontologies? By default ONT-API consumes less memory than the default OWL-API impl, but still a lot; see the Memory page. It is always possible, however, to put the Graph into Apache Jena TDB or TDB2. Also, in ONT-API it is possible to disable the whole cache (com.github.owlcs.ontapi.OntManagers#createDirectManager()) or just part of it (see com.github.owlcs.ontapi.config.CacheSettings).

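A minimal sketch under these assumptions: the triples are persisted in Jena TDB2, and `OntologyManager#addOntology(Graph)` wraps an existing graph as an ontology; both names should be verified against the ONT-API and Jena versions in use, and the TDB2 path is a placeholder.

```java
import org.apache.jena.graph.Graph;
import org.apache.jena.query.Dataset;
import org.apache.jena.tdb2.TDB2Factory;

import com.github.owlcs.ontapi.OntManagers;
import com.github.owlcs.ontapi.OntologyManager;

public class BigOntology {
    public static void main(String... args) {
        // Keep the triples on disk instead of holding them all in memory.
        Dataset dataset = TDB2Factory.connectDataset("/path/to/tdb2"); // placeholder path
        Graph graph = dataset.getDefaultModel().getGraph();

        // A "direct" manager works without the axiom cache, reading
        // straight from the RDF graph on every call.
        OntologyManager manager = OntManagers.createDirectManager();
        manager.addOntology(graph); // assumed ONT-API method to wrap a Graph
    }
}
```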
  • The good LLS and LA statistics demonstrate that the RDF-centric approach is not only more convenient, flexible and extensible: it can also be fast. Making the remaining operations faster is a main leitmotif of work on the subsequent releases.

  • As the standard JMH disclaimer reminds us: the numbers above are just data. Do not assume the numbers tell you what you want them to tell.
