picache.conf configuration file

This page outlines the settings available in the IPLM Cache configuration file /etc/mdx/picache.conf.

Overview

The main IPLM Cache configuration file is located by default (for package based installs) at /etc/mdx/picache.conf.

The IPLM Cache server must be restarted to apply changes that are made to the configuration file.

Main section

Setting Format Default Description
debug-enable
yes/no no Set to yes to enable debug logging.
root
unix path No default

IPLM Cache root directory.

This is the top level directory that will contain the IPVs populated into IPLM Cache.

It is normally hosted on a network drive accessible to all workspaces.


static-mount
True/False True

Indicates if the root directory is a static mount (True) or if it is automounted (False). Can be overridden in a project's section.

use-picache-managed-file
True/False False

If static-mount is set to False, use-picache-managed-file indicates if IPLM Cache just checks for the existence of the automounted root directory (False), or if the file '.picache-managed' within the root directory is checked for (True). Can be overridden in a project's section.

IPLM Cache's checking for the existence of the root directory or the root directory's .picache-managed file should cause the automounted directory to become mounted if it is not already.

use-unique-alias-symlink True/False False

If an IPV referenced by a unique alias is loaded into the cache and if use-unique-alias-symlink is set to True, a symbolic link is created in the destination directory at <root>/<library>/<ip>/+<unique-alias> that points to the source directory <root>/<library>/<ip>/<line>/<version>.

To remove the IPV from the cache, you can pass the unique alias as the <IPV> argument to the pi-admin picache remove <IPV> command; IPLM Cache then looks for the unique alias symlink in the destination directory and, if found, removes it, regardless of the use-unique-alias-symlink setting. If you instead identify the IPV without the unique alias, any symlink in the destination directory starting with a '+' and pointing to the symlink source directory is removed, again regardless of the use-unique-alias-symlink setting.
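For illustration, assuming a hypothetical library libA, IP cpu, line main, version 1.2.0, and unique alias GOLD, the setting and the resulting symlink look like this:

```
use-unique-alias-symlink = True

# Loading the IPV referenced by unique alias GOLD then creates:
#   <root>/libA/cpu/+GOLD  ->  <root>/libA/cpu/main/1.2.0
```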

site
string
local

IPLM Cache site.

Used for IPLM Cache identification and for mapping between Perforce commit/edge servers in a replication environment.

See Using Multiple Perforce Servers with Perforce IPLM for details.

Command line option: --site=<site>

port
integer
5000

IPLM Cache server listening port.

Command line option: --port=<port>

ipv-umask
octal number 027

IPV file umask.

IPV file mode creation mask. The umask applies to the IPV directory structures except the IPV directory itself which has ipv-dir-permission applied to it.

ipv-no-setaccess
True/False False Do not set IPV directory permissions or group ownership.
group-execute-only
True/False False

Indicates if the cache directory structure leading to an IP is to have group-execute-only permissions.

If set to True and if ipv-no-setaccess is False, sets the directories below the root directory (excluding the root directory itself), down to the directory containing the IP, to a permission of 710.

This makes it so that the cache structure is opaque to different users, but users in the same group have access to the IPVs as long as they know which IPVs are present where.
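As a sketch, using hypothetical library and IP names and the /picache-root directory from the example configuration below, group-execute-only = True (with ipv-no-setaccess = False) produces a structure like:

```
/picache-root                      root directory: not modified
/picache-root/libA                 710 (rwx--x---)
/picache-root/libA/cpu             710 (rwx--x---)
/picache-root/libA/cpu/main/1.2.0  the IPV itself (see ipv-umask and ipv-dir-permission)
```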

ipv-dir-permission
octal or permissions string "g=rX,o=" (equivalent to octal 0750)

Default PI cache directory permission when ipv-no-setaccess is False and group-execute-only is False.

ipv-umask still applies to directories above and within the IPV directory.

svn-dm
unix path /usr/share/mdx/products/picache/scripts/svn_dm.py Path to executable handling Subversion (SVN) Data Manager (DM) for IPLM Cache. IPLM Cache ships with a default svn_dm.py script. The default shown is for the RPM and DEB packages. For the self-executable run package, the install script sets svn-dm to the path of the svn_dm.py script within the install location.
custom-dm-timeout
integer 3600 (one hour) Timeout, in seconds, for the DM Handler executable (including the SVN DM). When non-zero, IPLM Cache uses the timeout command to control the time within which the DM Handler and svn-dm executables should complete.
ipv-cleanup
True/False False Enables clean up (removal) of unused IPVs. Set to True to have IPLM Cache remove IPVs from the cache that are not used in any Perforce IPLM workspaces. See also ipv-cleanup-days. Defaults to False in which unused IPVs are not removed. If ipv-cleanup is set to True, the pi-server and pi-server-credentials-file settings must be defined. Project specific configuration can override this setting.
ipv-cleanup-days
integer 30

Since IPLM Cache v1.7.0: Number of days to clean up (remove) IPVs which were initially loaded without a workspace (e.g. via 'pi ip publish') and were never used in any workspace. A value of zero means all IPVs without a workspace are removed, regardless of whether or not they were initially loaded without a workspace. Used when ipv-cleanup is set to True. Project specific configuration can override this setting.

Prior to IPLM Cache v1.7.0, ipv-cleanup-days was used by IPLM Cache to automatically remove IPVs from the cache which had not been accessed in this number of days. A value of zero disabled automatic IPV cleanup from cache.

pi-server
string No Default PiServer host:port pair that IPLM Cache accesses in order to clean up unused IPVs. IPLM Server is also used on load and update operations to later facilitate IPV cleanup. Also see pi-server-credentials-file. If ipv-cleanup is set to True, pi-server must be defined. To use PiServer for load and update operations, define pi-server. If the port is missing, a default port of 8080 is used. Project specific configuration can override this setting.
pi-server-credentials-file
unix path No default

Path to file containing the PiServer admin credentials (username and password) IPLM Cache uses to logon to the IPLM Server identified by pi-server. If pi-server is defined, pi-server-credentials-file must be defined. Project specific configuration can override this setting.

The contents of the credentials file consists of two lines, the first line containing only the username, the second line containing only the password.

The credentials file's permissions should be restricted, but make sure IPLM Cache's main process can access the file.
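For example, a credentials file with placeholder values (the real file must contain exactly these two lines, the username then the password):

```
piadmin
s3cret-passw0rd
```

Restricting the file with, for example, chmod 600 while keeping it readable by the account running IPLM Cache's main process satisfies this guidance.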

mongodb-credentials-file unix path No default

Path to file containing the MongoDB PiCache user's credentials (username and password) PiCache uses to logon to MongoDB. Has no default. If mongodb-credentials-file is commented out, no credentials will be used with MongoDB.

The contents of the file consists of two lines, the first line containing only the MongoDB PiCache user's username, the second line containing only the password.

The credentials file's permissions should be restricted, but make sure PiCache's main process can access the file.

cache-check-period-hrs
integer 24

The interval, in hours, at which IPLM Cache goes through the cache and checks static IPVs for consistency.

A value of zero disables the check.

max-num-concurrent-check-jobs
integer 0 The maximum number of concurrent IPV data consistency check jobs that can run at one time. A value of zero means no limit. Enter a non-zero value to balance IPV data consistency checking with responsiveness to end-user jobs.
worker-user
string mdxadmin

User running worker daemons. Default is IPLM Cache service account "mdxadmin".

worker-group
string mdxadmin

User group running worker daemons. Default is the primary group of worker-user.

By default the data in the cache will be owned by the service account, group ownership will be that user's default group, and permissions will be group read-only, world no access.

Normally a separate group is defined for the cache data and the group sticky bit is set on the top-level directory, so that all data inherits the same group by default. The server can be configured to set different permissions (for example, a customer may wish to give everyone read access to data in the cache).

worker-supp-groups
string

All existing supplemental groups for worker-user

List of supplemental groups, separated by commas, to be assigned to worker-user. Default is all existing supplemental groups, in group ID order, for worker-user up to any limit imposed by the system (IPLM Cache's Python executable has an internal limit of 32 supplemental groups).

Use this setting when your assigned worker-user has a number of supplemental groups that exceed the limit and you need to ensure worker-user is in the groups listed in worker-supp-groups.

worker-number
integer 8 Number of job processing worker processes
dm-program
string p4 Data Manager (DM) program IPLM Cache invokes; the example configuration sets it to the full path of the p4 executable.
p4-readonly
True/False No Default Whether or not to use the 'readonly' client type for Perforce DMs. This can reduce Perforce database contention.  Valid values are 'True' (use a 'readonly' client) and 'False' (do not use one).  If not explicitly set, IPLM Cache will attempt to query the Perforce server to see if it has been configured to allow 'readonly' clients, using one if possible.  The query command 'p4 configure show' requires Perforce 'super' access, and will be logged as a warning if the IPLM Cache user does not have that privilege.
queuing
ipv/user_rr ipv

Indicates queuing behavior:

  • ipv: (default) jobs are put into queues based on IPV names, and an IPV queue is worked on by a IPLM Cache back-end worker until it is drained; this can be thought of as "IPV-concentration" behavior
  • user_rr: for user-based round-robin queuing behavior, where jobs are put into queues based on the users (user names) that enqueued the jobs and jobs are worked on in a round-robin manner so no one user dominates IPLM Cache work
shutdown-worker-control
discard/wait/requeue discard

Shutdown control. shutdown-worker-control has 3 options:

discard: (default) fail the running job without waiting

wait: wait until the running job completes on its own

requeue: put the running job back in the queue

pre-build-hook
unix script path No Default

A user-defined script run on a build operation (for example, pi ip load), executed after creating the new IPV directory but before it is populated. For a P4 DM type IPV, the pre-build hook script is called after the P4 client is created but before the p4 sync is performed. For an SVN or DM Handler type IPV, the pre-build hook script is called before the SVN or DM Handler script is called. The hook script is called with a single argument, which is the full path to the new IPV directory. As of v1.7.0, the Perforce Helix server's P4PORT and P4CLIENT environment variables are set before calling the hook script so that the hook script can use them, if needed. The pre-build-hook script is often used to apply ACLs on the new IPLM Cache directory, after creation but before population.

If pre-build-hook is uncommented and the script does not exist or is not executable upon IPLM Cache startup, startup will abort with an error message.

If the script is removed or made non-executable after startup or has a non-zero exit status, including timing out (see pre-build-hook-timeout), the following occurs:

    1. The build process will stop and log an error message
    2. The user will be notified with a message on the command line showing that the IPV load failed along with the reason for failure, e.g. the hook script returned non-zero. 
    3. Workspace status will show the IPV missing after the failed load
    4. The empty IPV directory will be removed from the cache
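As an illustration only (not a script shipped with IPLM Cache), a minimal pre-build hook might set restrictive permissions on the new IPV directory passed as its single argument; the mktemp fallback exists only so the sketch can be run standalone:

```shell
#!/bin/sh
# Minimal pre-build hook sketch (hypothetical, for illustration).
# IPLM Cache invokes the hook with one argument: the full path to the newly
# created, still-empty IPV directory.
ipv_dir="${1:-$(mktemp -d)}"

[ -d "$ipv_dir" ] || { echo "IPV directory missing: $ipv_dir" >&2; exit 1; }

# Example site policy: owner full access, group read/execute, no world access.
# A real hook might apply ACLs here instead; for P4 IPVs, IPLM Cache (v1.7.0+)
# also exports P4PORT and P4CLIENT for the hook's use.
chmod 750 "$ipv_dir"
# Exiting non-zero at any point fails the build and removes the IPV directory.
```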
pre-build-hook-timeout
integer 3600

Gives the period of time, in seconds, IPLM Cache will wait for the pre-build-hook to execute. A value of zero means there is no timeout (IPLM Cache will wait indefinitely for the script to execute). If non-zero, if the pre-build-hook script takes longer than pre-build-hook-timeout seconds to execute, IPLM Cache gives the script a non-zero exit status and fails the build operation.

post-build-hook
unix script path No Default

A user-defined script run on a build operation (for example, pi ip load), executed after the IPV directory is populated (p4 sync is performed or SVN or DM Handler script is called). The hook script is called with a single argument, which is the full path to the new IPV directory. As of v1.7.0, the Perforce Helix server's P4PORT and P4CLIENT environment variables are set before calling the hook script so that the hook script can use them, if needed. 

If post-build-hook is uncommented and the script does not exist or is not executable upon IPLM Cache startup, startup will abort with an error message.

If the script is removed or made non-executable after startup or has a non-zero exit status, including timing out (see post-build-hook-timeout), the build fails identically as the pre-build-hook failing (see pre-build-hook).

Note that if a post-build-hook is defined, IPLM Cache will not set permissions on the IPV cache directory, this has to be handled by the hook script. See the IPLM Cache administration overview page discussing IPLM Cache Permissions and the use of the post-build-hook.

post-build-hook-timeout
integer 3600 Gives the period of time, in seconds, IPLM Cache will wait for the post-build-hook to execute. A value of zero means there is no timeout (IPLM Cache will wait indefinitely for the script to execute). If non-zero, if the post-build-hook script takes longer than post-build-hook-timeout seconds to execute, IPLM Cache gives the script a non-zero exit status and fails the build operation.
pre-update-hook unix script path No Default

A user-defined script run on an update operation (for example, pi update), executed before the cache contents are updated. For a P4 DM type IPV, the pre-update hook script is called before the p4 sync is performed. For an SVN or DM Handler type IPV, the pre-update hook script is called before the SVN or DM Handler script is called. The hook script is called with a single argument, which is the full path to the new IPV directory.

If pre-update-hook is uncommented and the script does not exist or is not executable upon IPLM Cache startup, startup will abort with an error message.

If the script is removed or made non-executable after startup or has a non-zero exit status, including timing out (see pre-update-hook-timeout), the following occurs:

    1. The update process will stop and log an error message
    2. The IPV contents in the cache will not be updated
    3. For a pi update operation, the user will be notified with a message on the command line showing that the IPV update failed along with the reason for failure, e.g. the hook script returned non-zero. 
    4. Workspace status will show an update is needed
pre-update-hook-timeout integer 3600 Gives the period of time, in seconds, IPLM Cache will wait for the pre-update-hook to execute. A value of zero means there is no timeout (IPLM Cache will wait indefinitely for the script to execute). If non-zero, if the pre-update-hook script takes longer than pre-update-hook-timeout seconds to execute, IPLM Cache gives the script a non-zero exit status and fails the update operation.
post-update-hook unix script path No Default

A user-defined script run on an update operation (for example, pi update), executed after the cache contents are updated (p4 sync is performed or SVN or DM Handler script is called). The hook script is called with a single argument, which is the full path to the new IPV directory. 

If post-update-hook is uncommented and the script does not exist or is not executable upon IPLM Cache startup, startup will abort with an error message.

If the script is removed or made non-executable after startup or has a non-zero exit status, including timing out (see post-update-hook-timeout), the following occurs:

    1. The update process will stop and log an error message
    2. The IPV contents in the cache will be updated
    3. For a pi update operation, the user will be notified with a message on the command line showing that the IPV update failed along with the reason for failure, e.g. the hook script returned non-zero. 
    4. For a pi release operation, a new release will successfully be created, but there will be no error message about IPLM Cache's update operation having failed
    5. IPLM Cache will remove the IPV's reference in MongoDB so it will no longer think the IPV is in the cache, however the IPV's contents will still be in the cache. To recover from this, make sure the post-update-hook script returns a zero exit status before the post-update-hook-timeout value and run pi ip publish on the IPV, afterwards verifying picache-ipv-admin.sh --list --all shows the IPV in the cache.

Note that if a post-update-hook is defined, IPLM Cache will not set permissions on the IPV cache directory, this has to be handled by the hook script. See the IPLM Cache administration overview page discussing IPLM Cache Permissions and the use of the post-update-hook.

post-update-hook-timeout integer 3600 Gives the period of time, in seconds, IPLM Cache will wait for the post-update-hook to execute. A value of zero means there is no timeout (IPLM Cache will wait indefinitely for the script to execute). If non-zero, if the post-update-hook script takes longer than post-update-hook-timeout seconds to execute, IPLM Cache gives the script a non-zero exit status and fails the update operation.
build-ipv-wait-time
integer 60

IPV contents wait time. On an IPV build (ip load) operation, build-ipv-wait-time specifies the time in seconds to wait for the IPV contents to become available, for example on a Perforce edge server. Defaults to 60 seconds. A value of zero will cause an indefinite wait time.

check-signature
True/False False

Check request signature. When enabled, the IPLM Cache server checks and verifies the signature field in cache build/update requests before further processing the request. The default is not to check the signature.

log-file-backup-count
integer 1 Number of rotated log file backups to keep.
log-file-maxbytes
integer 10485760 Maximum log file size, in bytes, before rotation.
log-timestamp-utc
True/False False Use UTC timestamps in log messages.
log-target-backend
mongodb/syslog mongodb

Log target for IPLM Cache backend processes other than the main process.

If mongod-host is commented out, this must be set to syslog.

syslog-address
string localhost:514

Unix syslog connection address.

Use a different host:port to log to a different host and/or port.

Use a file path, like /dev/log, to specify a Unix domain socket to send the messages to.
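For example (the remote host name is hypothetical):

```
# default: local syslog daemon over UDP
#syslog-address = localhost:514

# a remote syslog collector
#syslog-address = loghost.example.com:514

# a Unix domain socket
#syslog-address = /dev/log
```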

syslog-socktype
udp/tcp udp Unix syslog socket type.
syslog-facility

auth/authpriv/cron/
daemon/ftp/kern/
lpr/mail/news/syslog/
user/uucp/local0/
local1/local2/local3/
local4/local5/local6/
local7

user Unix syslog facility to use for IPLM Cache messages.
log-mongodb-ttl-days
integer 30

Time-To-Live, in days, for IPLM Cache logs in the MongoDB database.

Must be between 1 and 360.

Used when log-target-backend = mongodb (mongod-host must be uncommented).

mongod-logs-write-ack
True/False True

Indicates if IPLM Cache is to wait for writes to be acknowledged from MongoDB upon writing to the logs collection. Setting to False gives better performance with the trade-off of not detecting errors.

Used when log-target-backend = mongodb (mongod-host must be uncommented).

mongod-server-selection-timeout-ms
integer 2000 Controls how long, in milliseconds, IPLM Cache will wait to find an available MongoDB server. Defaults to 2000 (2 seconds). Used when mongod-host is uncommented.
node
string <hostname>

Node name. If not specified, it defaults to host name.

compression
True/False False

Uncomment to enable filelist compression between IPLM Client and IPLM Cache.

Pi Client v2.35.1 and later use this setting in IPLM Cache. Earlier IPLM Client versions enable client-side compression by setting the Pi Client configuration file's 'use_compression' setting, in the file's [PICACHE] section, to True.

redis-credentials-file unix path No default

Path to file containing the Redis Server and Redis Sentinel passwords PiCache uses to log on to Redis Server and Redis Sentinel. Has no default. If redis-credentials-file is commented out, no credentials will be used with Redis Server or Redis Sentinel.

The contents of the file consists of one or two lines containing Redis's passwords. The first line contains the Redis Server password, from mdx-backend-redis.conf's requirepass directive. If Redis Sentinel is used (Redis is in an HA configuration) and it has authentication enabled, the second line contains the Redis Sentinel password, from mdx-backend-sentinel.conf's requirepass directive, even if it is the same password as the Redis Server password.
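For example, with placeholder passwords, an HA deployment's credentials file would contain the Redis Server password on the first line and the Redis Sentinel password on the second:

```
redis-server-s3cret
redis-sentinel-s3cret
```

A single-instance deployment's file would contain only the first line.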

The credentials file's permissions should be restricted, but make sure PiCache's main process can access the file.

redis-host
string No default Redis host name:port pair when a single Redis instance is used. One of redis-host and redis-sentinel-instances setting is required. If redis-host is used, any redis-sentinel-instances setting is ignored.
redis-sentinel-instances
string None

List of three or more Redis Sentinel host name:port pairs, separated by a comma. Used when Redis is deployed in a High Availability (HA) configuration. redis-sentinel-instances is ignored if redis-host is used. Prior to v1.7.0, only three Redis Sentinel instances could be set.

Example configuration:

redis-sentinel-instances = 10.211.55.6:26379, 10.211.55.7:26379, 10.211.55.8:26379

redis-sentinel-master
string None

Redis Sentinel master service name. Used when Redis is deployed in a High Availability (HA) configuration. The configuration is required if redis-sentinel-instances is used and ignored if redis-host is used.

redis-sentinel-socket-timeout
floating point number None (no timeout)

Number of seconds to timeout waiting for data from a Redis Sentinel instance. An optional setting when Redis is deployed in a High Availability (HA) configuration. redis-sentinel-socket-timeout is ignored if redis-sentinel-instances is not used. If redis-sentinel-instances is used and redis-sentinel-socket-timeout is not used, then IPLM Cache will wait indefinitely for data from a Redis Sentinel instance.

mongod-host
string None

MongoDB host information. Either a single host:port pair for a single-node configuration or a list of three host:port pairs, separated by a comma, for an HA configuration. The port is optional; if not present, it defaults to 27017.
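For example (host names are hypothetical):

```
# single-node (port defaults to 27017 if omitted):
#mongod-host = mongo1.example.com:27017

# HA configuration with three members (also set mongod-rs):
#mongod-host = mongo1.example.com:27017, mongo2.example.com:27017, mongo3.example.com:27017
```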

If mongod-host is commented out, IPLM Cache will not use MongoDB meaning:

  • The IPV auto-cleanup feature will be disabled, as if the ipv-cleanup-days setting were set to zero
  • The IPV data consistency check feature will be disabled, as if the cache-check-period-hrs setting were set to zero
  • If the log-target-backend setting is not set to syslog, IPLM Cache will abort startup
mongod-rs
string None

MongoDB replica set name for an HA configuration. Default is None, for a single-node configuration.

Not parsed if mongod-host is commented out or only contains one host.

heartbeat-base-key
string mdx:picache:heartbeat:

Heartbeat control (for HA). Base key name for the heartbeat data structure on the HA Redis master.

The node name is appended to this base key for the full key name.

heartbeat-healthy-message
string good Gives the message in the heartbeat data structure that indicates a node is healthy.
heartbeat-unhealthy-message
string bad Gives the message in the heartbeat data structure that indicates a node is unhealthy.
heartbeat-down-message
string down Gives the message in the heartbeat data structure that indicates a node is going down.
heartbeat-interval
integer 0 (no checks done)

Nominal interval, in seconds, to check for nodes' heartbeats. Default is zero, meaning no checks are done.

heartbeat-loss-max
integer 5 Number of successive missing heartbeats before a worker node is declared down.
prometheus-metrics-enable
True/False False

Indicate if IPLM Cache is to expose its metrics to Prometheus (True) or not (False).

statsd-metrics-port
integer 9125

The port IPLM Cache uses to send StatsD metrics to if prometheus-metrics-enable is True.

Metrics are sent to the Prometheus statsd_exporter, which the Prometheus server scrapes to collect them.

maintenance-period-hrs
floating point number 1.0 The interval, in hours, that IPLM Cache maintenance activities (IPV cleanup and data consistency checks) sleep between runs. A value of zero disables maintenance activities: ipv-cleanup must then be set to False and cache-check-period-hrs must be set to zero. A value between 0 and 1 can also be entered, such as 0.5 to have the maintenance task sleep for 30 minutes.
maintenance-job-wait-mins
floating point number 10.0 The period of time, in minutes, IPLM Cache will wait for the remove and check jobs enqueued by the cleanup and data consistency check activities. A value of zero means no waiting is done.

Watchdog Section

Options for Watchdog Monitor/Heartbeat component. Any command-line options will override these options.

Setting Default Description
log-file
/var/log/mdx-picache/wdog.log

Use log-file here to override log-dir in the [main] section and the default Watchdog Monitor log file name, if desired.

log-level
warning

Set the minimum logging level. Options are 'debug', 'info', and 'warning'.

ping-period 20

Maximum period, in seconds, expected between watchdog pings. The default is 20. Must be greater than or equal to 5 and less than or equal to 120. Can be overridden by client ping command to Watchdog Monitor.

profile False

Indicates whether Watchdog Monitor profiles the client state. Options are 'true' (to do profiling) and 'false' (to not profile). Default = false.

prometheus-metrics-enable False

Indicate if the Watchdog Monitor is to expose its metrics to Prometheus (True) or not (False). The default is False.

prometheus-metrics-port 2005

Give the port to expose Prometheus metrics on if prometheus-metrics-enable is True.

Multi-Project Configuration

IPLM Cache can be configured to support multiple projects by adding a section for each project in the configuration file. The project name is defined in the section header.
Each project has a separate top-level directory which can be:

  1. A relative path under the root directory
    OR
  2. An absolute path to a different location
Multi-Project Configuration
[project_name]
root = <project_root_directory>

If IPLM Cache is configured to support projects, all client requests must identify the project to use. Set the project with the MDX_PROJECT environment variable.
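For example, with a hypothetical project section named chipA:

```shell
# Select the project for subsequent IPLM client requests; the value must match
# a project [section] name in picache.conf (chipA is a hypothetical example).
export MDX_PROJECT=chipA
```

Client commands issued afterwards (for example, pi ip load) are then served from the chipA project's root.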

The following options are also supported in the project section:

  • ipv-umask

  • ipv-no-setaccess

  • group-execute-only
  • ipv-dir-permission

  • ipv-cleanup-days

  • static-mount
  • use-picache-managed-file
  • pre-build-hook-timeout
  • post-build-hook-timeout
  • pre-update-hook-timeout
  • post-update-hook-timeout
  • build-ipv-wait-time

Perforce Edge Server Wait Time

When using a Perforce Edge Server in your deployment, when an IPLM Cache build operation is done (for example, by using the pi ip load command) the IPV's files may not yet be replicated to the edge server. In this case, IPLM Cache's build-ipv-wait-time option comes into play:

picache.conf
# IPV contents wait time
# On an IPV build (ip load) operation, build-ipv-wait-time specifies the
# time in seconds to wait for the IPV contents to become available, for
# example on a Perforce edge server. Defaults to 60 seconds. A value of zero
# will cause an indefinite wait time.
#build-ipv-wait-time = 60

By default, IPLM Cache will wait up to one minute for the IPV's files to become available before failing the operation. This option can be fine-tuned for your deployment. A setting of zero will cause IPLM Cache to wait indefinitely for the IPV's files.

picache.conf example

[main]

# Uncomment to enable debug logging.
#debug-enable = yes

# PI cache root directory
root = /picache-root

# PI cache site
site = local

# PI cache server listening port
port = 5000

# IPV file umask
# default: 027
#ipv-umask = 027

# Disable access controls on PI cache directories
# default is False
ipv-no-setaccess = True

# Indicates if the cache directory structure leading to an IP is to have
# group-execute-only permissions. Defaults to False. If set to True and if
# ipv-no-setaccess is False, sets directories from the root directory, but not
# including the root directory, to the directory containing the IP to a
# permission of 710. This makes it so that the cache structure is opaque to
# different users, but users in the same group have access to the IPVs as long
# as they know which IPVs are present where.
#group-execute-only = False

# Default PI cache directory permission when ipv-no-setaccess is False and
# group-execute-only is False
# Octal permissions
#ipv-dir-permission = 0755
# Permissions string
#ipv-dir-permission = "g=rX,o="

# Enables clean up (removal) of unused IPVs. Set to True to have IPLM Cache remove
# IPVs from the cache that are not used in any Perforce IPLM workspaces. See
# also ipv-cleanup-days.
# Defaults to False in which unused IPVs are not removed.
# If ipv-cleanup is set to True, the pi-server and pi-server-credentials-file
# settings must be defined.
# Project specific configuration can override this setting.
ipv-cleanup = True

# Number of days to clean up (remove) IPVs which were initially loaded without
# a workspace (e.g. via 'pi ip publish') and were never used in any workspace.
# Defaults to 30. A value of zero means all IPVs without a workspace are
# removed, regardless of whether or not they were initially loaded without a
# workspace. Used when ipv-cleanup is set to True.
# Project specific configuration can override this setting.
ipv-cleanup-days = 30

# host:port of PiServer that IPLM Cache accesses in order to clean up unused
# IPVs. Also used on load and update operations to later facilitate IPV
# cleanup. Also see pi-server-credentials-file.
# Has no default. If ipv-cleanup is set to True, pi-server must be defined.
# To use PiServer for load and update operations, define pi-server. If the
# port is missing, a default port of 8080 is used.
# Project specific configuration can override this setting.
pi-server = localhost:8080

# Path to file containing the PiServer admin credentials (username and
# password) IPLM Cache uses to logon to the PiServer identified by pi-server.
# Has no default. If pi-server is defined, pi-server-credentials-file must be
# defined.
# The contents of the file consists of two lines, the first line containing
# only the username, the second line containing only the password.
# Project specific configuration can override this setting.
# The credentials file's permissions should be restricted, but make sure
# IPLM Cache's main process can access the file.
pi-server-credentials-file = /home/bob/.methodics/credentials.txt

# The interval at which the cache will be gone through and static IPVs checked
# for consistency. Defaults to 24 hours. A value of zero disables the check.
#cache-check-period-hrs = 24

# The maximum number of concurrent IPV data consistency check jobs that can run
# at any one time. A value of zero means no limit. Defaults to zero. Give a
# non-zero value to balance IPV data consistency checking with responsiveness to
# end-user jobs.
max-num-concurrent-check-jobs = 4

# User running worker daemons
# default is PI cache service account mdxadmin
worker-user = mdxadmin

# User group running worker daemons
# default is the primary group of worker-user
worker-group = mdxadmin

# List of supplemental groups, separated by commas, to be assigned to
# worker-user. Default is all existing supplemental groups, in group ID order,
# for worker-user, up to any limit imposed by the system (IPLM Cache's Python
# executable has an internal limit of 32 supplemental groups). Use this setting
# when your assigned worker-user has more supplemental groups than the limit
# allows and you need to ensure worker-user is in the groups listed in
# worker-supp-groups.
#worker-supp-groups = group1, group2, group3

# Number of job processing worker processes
# default is 8
#worker-number = 8

# DM settings
dm-program = /usr/share/mdx/products/piextras/perforce/current/bin/p4

# queuing behavior
# queuing has 2 options:
# ipv: (default) jobs are put into queues based on IPV names, and an IPV
# queue is worked on by an IPLM Cache back-end worker until it is drained;
# this can be thought of as "IPV-concentration" behavior
# user_rr: for user-based round-robin queuing behavior, where jobs are put
# into queues based on the users (user names) that enqueued the jobs
# and jobs are worked on in a round-robin manner so no one user
# dominates IPLM Cache work
queuing = user_rr

# shutdown control
# shutdown-worker-control has 3 options:
#   discard: (default) fail the running job without waiting
#   wait: wait until the running job completes
#   requeue: put the running job back in queue
shutdown-worker-control = requeue

# pre-build hook
# A user-defined script can be configured to run after creating
# the new IPV directory and the P4 client, but before running p4 sync.
# The hook script is called with a single argument:
#   the full path to the new IPV directory
#pre-build-hook =
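# For example, a hypothetical hook script at /usr/local/bin/pre-build.sh,
# which would receive the new IPV directory as its first argument, could be
# configured as:
#pre-build-hook = /usr/local/bin/pre-build.sh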

# pre-build hook timeout
# when not zero, IPLM Cache uses the timeout command to limit the time within
# which the hook script must complete
# default is 3600 seconds
#pre-build-hook-timeout = 3600

# post-build hook
# A user-defined script can be configured to run after p4 sync completes.
# The hook script is called with a single argument:
#   the full path to the new IPV directory
#post-build-hook =

# post-build hook timeout
# when not zero, IPLM Cache uses the timeout command to limit the time within
# which the post-build hook script must complete
# default is 3600 seconds
#post-build-hook-timeout = 3600

# IPV contents wait time
# On an IPV build (ip load) operation, build-ipv-wait-time specifies the
# time in seconds to wait for the IPV contents to become available, for
# example on a Perforce edge server. Defaults to 60 seconds. A value of zero
# causes IPLM Cache to wait indefinitely.
#build-ipv-wait-time = 60

# check request signature
# When enabled, the IPLM Cache server checks and verifies the signature field
# in cache build/update requests before processing the request further.
# Default is not to check the signature.
#check-signature = False

# log file rotation: number of backup files to keep
# default is 1
#log-file-backup-count = 1

# log file rotation: maximum log file size (in bytes) before rollover
# default is 10485760
#log-file-maxbytes = 10485760

# use UTC timestamp in log messages
# Note that the timestamp in the IPLM Cache access log (4th field in the log
# file) is the time the request was received; it is not affected by this
# configuration option.
# default is False
# log-timestamp-utc = False

# Log target for IPLM Cache backend processes other than the main process.
# One of:
# 'mongodb' to log to MongoDB picache database log collection
# 'syslog' to log to a Unix syslog
# Default is MongoDB.
# If mongod-host is commented out, this must be set to syslog.
#log-target-backend = syslog

# Unix syslog connection address.
# Defaults to 'localhost:514'.
# Use a different host:port to log to a different host and/or port.
# Use a file path, like /dev/log, to specify a Unix domain socket to send the
# messages to.
#syslog-address = localhost:514
#syslog-address = /dev/log

# Unix syslog socket type: one of 'udp' or 'tcp'.
# Defaults to udp.
#syslog-socktype = tcp

# Unix syslog facility to use for IPLM Cache messages: one of 'auth', 'authpriv',
# 'cron', 'daemon', 'ftp', 'kern', 'lpr', 'mail', 'news', 'syslog', 'user',
# 'uucp', and 'local0' to 'local7'.
# Defaults to 'user'.
#syslog-facility = user

# Time-To-Live, in days, for IPLM Cache logs in the MongoDB database.
# Defaults to 30 (days). Must be between 1 and 360.
# Used when log-target-backend = mongodb (mongod-host must be uncommented).
#log-mongodb-ttl-days = 30

# Indicate if IPLM Cache is to wait for writes to be acknowledged from MongoDB
# upon writing to the logs collection. Default is True. Setting to False gives
# better performance with the trade-off of not detecting errors.
# Used when log-target-backend = mongodb (mongod-host must be uncommented).
#mongod-logs-write-ack = False

# Controls how long (in milliseconds) IPLM Cache will wait to find an
# available, appropriate MongoDB server. Defaults to 2000 (2 seconds).
# Used when mongod-host is uncommented.
#mongod-server-selection-timeout-ms = 2000

# node name
# if not specified, it defaults to host name
#node = localhost

# Uncomment to enable filelist compression between IPLM Client and IPLM Cache.
# IPLM Client v2.35.1 and later use this setting in IPLM Cache. Earlier IPLM
# Client versions enable client-side compression by setting the IPLM Client
# configuration file's 'use_compression' setting, in the file's [PICACHE]
# section, to True.
#compression = True

# redis host name:port
# default is None
redis-host = localhost:6379

# redis sentinel instances
# list of three name:port pairs, separated by commas
# this setting is ignored if redis-host is configured
# default is None
#redis-sentinel-instances = 10.211.55.6:26379, 10.211.55.7:26379, 10.211.55.8:26379
# if redis-sentinel-instances is not defined, then redis-sentinel-master
# will not be parsed
#redis-sentinel-master = mymaster
# if redis-sentinel-instances is not defined, then
# redis-sentinel-socket-timeout will not be parsed
# default is None
#redis-sentinel-socket-timeout = 1

# MongoDB host information.
# Either a single name:port pair for a single-node configuration, or a
# list of three name:port pairs, separated by commas, for an HA configuration.
# Port is optional; if not present it defaults to 27017.
# Default is None
mongod-host = localhost:27017
#mongod-host = 10.211.55.6:27017, 10.211.55.7:27017, 10.211.55.8:27017

# MongoDB replica set name for an HA configuration.
# Default is None, for a single-node configuration.
# Not parsed if mongod-host is commented out or only contains one host.
#mongod-rs = rs0

# heartbeat control (for HA)

# Base key name for heartbeat data structure on HA Redis Master. The node name
# is appended to this base key for the full key name.
# Default = mdx:picache:heartbeat:
#heartbeat-base-key = mdx:picache:heartbeat:
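# For example, with the default base key and a hypothetical node name of
# 'cache01', the full heartbeat key would be:
#   mdx:picache:heartbeat:cache01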

# Gives message in heartbeat data structure that indicates a node is healthy.
# Default = good.
#heartbeat-healthy-message = good

# Gives message in heartbeat data structure that indicates a node is unhealthy.
# Default = bad.
#heartbeat-unhealthy-message = bad

# Gives message in heartbeat data structure that indicates a node is going down.
# Default = down.
#heartbeat-down-message = down

# Nominal interval, in seconds, to check for nodes' heartbeats.
# Default is zero, meaning no checks are done.
heartbeat-interval = 10

# Number of successive missed heartbeats before a worker node is declared down.
heartbeat-loss-max = 5
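# For example, with heartbeat-interval = 10 and heartbeat-loss-max = 5, a
# node would be declared down after roughly 5 * 10 = 50 seconds without a
# heartbeat (assuming checks run at the nominal interval).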

# Indicate if IPLM Cache is to expose its metrics to Prometheus (True)
# or not (False). Default = False.
prometheus-metrics-enable = True

# The port IPLM Cache sends StatsD metrics to when
# prometheus-metrics-enable is True. Defaults to 9125. Metrics are sent to
# the Prometheus statsd_exporter, which the Prometheus server scrapes to
# collect them.
#statsd-metrics-port = 9125

# The interval, in hours, that IPLM Cache maintenance activities (cleanup
# and data consistency checks) sleep before running again. Defaults to
# 1 hour. A value of zero disables maintenance activities.
#maintenance-period-hrs = 1

# The period of time, in minutes, IPLM Cache waits for the remove and check
# jobs enqueued by the cleanup and data consistency check activities to
# complete. Defaults to 10 minutes. A value of zero means no waiting is done.
#maintenance-job-wait-mins = 10

[watchdog]
# Options for Watchdog Monitor/Heartbeat component. Any command-line options
# will override these options.

# Set the minimum logging level. Options are 'debug', 'info', and 'warning'.
# Default = warning.
#log-level = info

# Maximum period, in seconds, expected between watchdog pings. Default = 20.
# Must be greater than or equal to 5 and less than or equal to 120.
# Can be overridden by client ping command to Watchdog Monitor.
#ping-period = 10

# Indicate whether or not the Watchdog Monitor is to only profile clients.
# Options are 'true' (to do profiling) and 'false' (to not profile).
# Default = false.
#profile = true

# Indicate if the Watchdog Monitor is to expose its metrics to Prometheus (True)
# or not (False). Default = False.
prometheus-metrics-enable = True

# Give the port to expose Prometheus metrics on if prometheus-metrics-enable is
# True. Defaults to 2005.
#prometheus-metrics-port = 2005