Neo4j configuration
Neo4j is the Perforce IPLM database backend. This page describes the Neo4j configuration options relevant to Perforce IPLM.
Perforce IPLM-specific settings can be defined in the neo4j.conf
file. These settings should be identical on all Neo4j servers in a cluster, and the Neo4j server must be restarted for a change to take effect. If an illegal value is specified for a setting, Neo4j server startup is aborted and an error is logged in the Neo4j log file.
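For example, a neo4j.conf fragment with a few Perforce IPLM settings might look like the following. The values shown are illustrative only (taken from the defaults and examples below), not tuning recommendations:

```properties
# Perforce IPLM settings in neo4j.conf (illustrative values)
mdx.repo_path_validation_enabled=true
mdx.hook_script=/methodicsiplm/bin/server-hook.sh
mdx.hook_execution_timeout=5000
mdx.pagination_cli_page_size=5000
```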
Perforce IPLM settings
Setting | Type | Description | Default value |
---|---|---|---|
mdx.repo_path_validation_enabled | boolean | If true, repo paths must not overlap across all IPs. Example: /workspaces/abc and /workspaces/abc/xyz cannot both exist. | true |
mdx.hook_script | string | Absolute path to the server hook script. The server hook script is executed before some operations and is useful for implementing custom validation rules. Example: /methodicsiplm/bin/server-hook.sh | n/a |
mdx.hook_execution_timeout | integer | Server hook execution timeout (in ms, must be >= 0). If executing the server hook script takes longer than this, the script is aborted and the operation fails. | 5000 |
mdx.hook_lock_wait_timeout | integer | Server hook lock wait timeout (in ms, must be >= 0). At most one instance of the server hook script is executed for any given object (Library, IP, etc.). | 3000 |
mdx.pagination_cli_page_size | integer | Page size used by the CLI (0 means unlimited, must be >= 0). A value greater than 1000 is recommended. | 5000 |
mdx.pagination_cli_concurrent_requests | integer | Number of concurrent page requests made by the CLI (must be > 0). Setting this to the number of slave servers in the Neo4j cluster is recommended. | 5 |
mdx.user_session_last_access_timestamp_tolerance | integer | Tolerance period between updates of the user session last access timestamp (in ms, must be >= 0). To minimize lock contention and improve performance, the timestamp is updated only if it is older than this tolerance period. | 48 hours (172800000 ms) |
mdx.transaction_max_retries_entity_not_found | integer | Maximum number of transaction retries when an entity has been deleted in a concurrent transaction (must be >= 0). When an object is deleted and listed simultaneously, the listing transaction can be retried. | 3 |
mdx.transaction_max_retries_other | integer | Maximum number of transaction retries when any other error occurs (must be >= 0). When a transaction fails because of an unexpected error, it can be retried. | 0 |
mdx.transaction_retries_delay | integer | Delay before each transaction retry (in ms, must be >= 0). A value greater than 0 increases the chance that the transaction eventually succeeds, at the cost of a possibly longer response time (due to the delay). | 0 |
mdx.janitor_initial_delay | integer | Delay before the first execution of the janitor (in seconds, must be >= 0). This delay helps prevent the janitor from running during the initial startup of Neo4j. | 60 |
mdx.janitor_period | integer | Time between executions of the janitor (in seconds, must be > 0) | 24 hours (86400 s) |
mdx.file_list_compressor_threads | integer | Number of threads used for background file list compression (must be >= 0). Setting this to the number of cores of the server is recommended. | 1 |
mdx.file_list_compressor_batch_size | integer | Maximum number of uncompressed file lists scheduled for compression during one janitor execution (must be > 0). The janitor compresses file lists in batches. | 50000 |
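As a concrete illustration of mdx.hook_script, a minimal hook skeleton is sketched below. The interface is an assumption for illustration: the script is assumed to receive a description of the pending operation as arguments and to reject the operation by exiting with a non-zero status. Consult the Perforce IPLM server documentation for the actual contract before deploying a hook.

```shell
#!/bin/sh
# Hypothetical skeleton for the server hook script referenced by
# mdx.hook_script (e.g. /methodicsiplm/bin/server-hook.sh).
# ASSUMPTION: the server passes the pending operation as arguments and
# treats a non-zero exit status as a rejection of that operation.

LOG="${IPLM_HOOK_LOG:-/tmp/iplm-server-hook.log}"

validate() {
    # Illustrative rule: reject any operation whose description contains
    # "forbidden"; accept everything else. Replace with real checks.
    case "$*" in
        *forbidden*)
            echo "rejected: $*" >>"$LOG"
            return 1
            ;;
        *)
            echo "accepted: $*" >>"$LOG"
            return 0
            ;;
    esac
}

validate "$@"
```

Keep the script fast: if it runs longer than mdx.hook_execution_timeout it is aborted and the operation fails, and only one instance runs per object at a time, so a slow hook serializes operations on that object.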
Useful Neo4j settings
Setting | Type | Description | Default value |
---|---|---|---|
dbms.threads.worker_count | integer | Number of Neo4j worker threads. Defines the size of the thread pool available to the database engine for executing queries. By default it is set to the number of cores the OS reports (typically 4-8 for low-end machines, 16-32 for larger ones), up to 500; a custom value can be much higher (40K+). The right number is therefore a function of the CPU profile and the workload. If the workload causes the database engine to run queries that block, it makes sense to raise the thread count so that other threads can continue doing work. If the workload consists of expensive queries that simply take a long time to run and the CPU is already maxed out, raising this value no longer helps; instead, add another machine to the cluster or use a more powerful machine with more cores. Since these are threads, their memory is accounted against the main Java process, which should auto-tune based on the memory available on the system. | number of cores |
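If profiling shows blocked queries alongside spare CPU capacity, the pool can be enlarged in neo4j.conf. The value below is purely illustrative; size it to your own CPU profile and workload as described above:

```properties
# neo4j.conf: override the worker thread pool size (illustrative value)
dbms.threads.worker_count=64
```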
Types of values
Type | Description |
---|---|
boolean | One of: true, false |
integer | An integer number in decimal format |
string | A string |