IPLM Cache administration overview
Learn how to perform common administrative tasks for IPLM Cache.
IPLM Cache server configuration
The IPLM Cache server reads the picache.conf configuration file on startup. The default location for the configuration file is /etc/mdx/picache.conf. See the picache.conf Configuration file page for details.
Restart IPLM Cache to apply changes in the configuration file.
Enabling IPLM Cache
IPLM Cache is enabled by setting the MDX_PICACHE_SERVER environment variable to point to the site IPLM Cache server, or via settings in the picache.conf file. Refer to the Client Configuration section for more details.
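For example, a minimal client-side setup might export the variable as shown below; the host name and port are placeholders, and the exact value format expected at your site is described in the Client Configuration section:
# Placeholder host/port -- substitute your site's IPLM Cache server address
export MDX_PICACHE_SERVER=picache.example.com:5000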
Configuring IPLM Cache for use with Perforce Core Server
- Place a .p4trust file in the mdxadmin home directory.
- Create a .p4config file with P4USER set to the user that IPLM Cache should use to log in to the Perforce Core Server.
- Define a long-lived tickets group in the Perforce Core Server that includes this user, so that the login ticket does not expire.
- Create a .p4tickets file with a long-lived ticket stored so that IPLM Cache can log in.
Log in as the mdxadmin user and make sure you are in that user's home directory. If you are root, this can be done with:
sudo -u mdxadmin -i
Then use the following example to set the P4CONFIG environment variable for the cache user so that it can find the .p4config file:
cd ~mdxadmin
echo "P4USER=mdxuser" >> .p4config
echo "export P4CONFIG=.p4config" >> .bashrc
p4 -Pssl:perforce:1666 trust -fy
p4 -Pssl:perforce:1666 -u mdxuser login
The above code performs the following steps:
- Change to the mdxadmin home directory.
- Add the P4USER configuration to the .p4config file.
- Ensure the P4CONFIG variable is defined for the mdxadmin user's environment.
- (Optional) Create a .p4trust file for the specified Perforce server. This is only needed if the server is using SSL.
- Log in to the specified Perforce server as the user, which creates an entry in the .p4tickets file.
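To verify the result, you can list the login tickets now stored for the account; p4 tickets prints the contents of the user's .p4tickets file (output varies by site):
p4 tickets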
IPLM Cache Watchdog
The IPLM Cache Watchdog monitors the status of IPLM Cache workers and other processes and communicates that status to peer watchdog monitors and the Linux Watchdog Daemon. If any component is unhealthy, the Linux Watchdog Daemon can initiate a reboot of the node to try to fix the problem. When an IPLM Cache watchdog monitor is stopped, it communicates its 'down' state to its peers so that a peer can handle cleanup of the downed node.
Download the following files, which provide additional information and sample configuration files for configuring the Linux Watchdog Daemon:
- A README-HA-CONFIG file with instructions on setting up the Linux Watchdog Daemon
- A Linux Watchdog Daemon configuration file, watchdog.conf, that can be used as-is
- A Linux Watchdog Daemon test executable, mdx_wdog_stat_wrapper.sh, that can be modified to interface properly with the IPLM Cache Watchdog Monitor
- A Linux Watchdog Daemon init script, watchdog
As of IPLM Cache v1.6.2, a tool, picache-wdog-unregister.sh, is included in the /usr/share/mdx/products/picache/bin directory. It can be used if the Watchdog continually reports that a client has stopped pinging. This could happen, for example, on a Redis Master instance failover where a client attempts to unregister with the watchdog during the failover but the watchdog doesn't receive the client's message.
$ which picache-wdog-unregister.sh
/usr/share/mdx/products/picache/bin/picache-wdog-unregister.sh
$ picache-wdog-unregister.sh --help
usage: /usr/share/mdx/products/picache/local/lib/python2.7/site-packages/methodics/picache_server/tools/wdog_unregister.pyc
       [-h] [-v] [-c CONF] entity_name [entity_name ...]

IPLM Cache Watchdog entity unregister tool

positional arguments:
  entity_name           one or more entity names to unregister

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Verbose output
  -c CONF, --conf CONF  IPLM Cache config file, default: /etc/mdx/picache.conf

$ picache-wdog-unregister.sh -v test1 test2
args = Namespace(conf='/etc/mdx/picache.conf', entity_name=['test1', 'test2'], verbose=True)
found master, 10.211.55.18:6379
Publishing message: -test1
Publishing message: -test2
IPLM Cache server monitoring
See the Perforce IPLM Metrics page for information on setting up IPLM Cache metrics monitoring.
IPLM Cache permissions
By default, data in the cache is writable only by the IPLM Cache service account, with read-only access for the group and no access for other users.
To change the default umask used by the IPLM Cache workers, set ipv-umask in the configuration file.
To override the default permissions for the top-level directories of IPVs in the cache area, set ipv-dir-permission in the configuration file.
To make the cache directory structure leading to an IPV have group-execute-only permissions, set group-execute-only in the configuration file. This makes the cache structure opaque to users outside the group, while users in the same group can still access the IPVs as long as they know which IPVs are present and where.
To prevent the server from setting file permissions or group ownership, set ipv-no-setaccess in the configuration file.
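As a minimal sketch, the options above might be combined in picache.conf as follows; the section placement and the values shown here are illustrative assumptions rather than shipped defaults:
[main]
# Illustrative values only -- adjust to your site's policy
ipv-umask = 027
ipv-dir-permission = 750
group-execute-only = True
ipv-no-setaccess = False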
IPLM Cache v1.6.0 introduced support for a post-build-hook script run after the cache is populated. As of v1.6.2, if a post-build-hook script is used, IPLM Cache will not set an IPV's permissions after it is loaded in order to improve performance. Similarly, as of v1.9.4, if a post-update-hook script is used, IPLM Cache will also not set an IPV's permissions after it is updated. If permissions are required to be set, the post-build-hook and post-update-hook scripts will need to do that, as demonstrated by the following example post-build-hook script:
#!/bin/bash
ipv_group=group129
ipv_perms="g=rX,o=r"
group=$(stat -c "%G" $1)
echo "Changing IPV contents group ownership from group $group to $ipv_group..."
sg $ipv_group "chgrp -c -R $ipv_group $1"
echo "Changing IPV directory's permissions to $ipv_perms..."
sg $ipv_group "chmod $ipv_perms $1"
dir_name=$(dirname "$1")
while [ "$dir_name" != "/picache-root" ]
do
    echo "Changing directory $dir_name permissions to 710..."
    chmod 710 "$dir_name"
    dir_name=$(dirname "$dir_name")
done
exit 0
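The hook script itself is referenced from picache.conf. The following is a hedged sketch only, assuming the settings are named post-build-hook and post-update-hook (as referenced above) and take the script path as their value; see the picache.conf Configuration file page for the authoritative syntax:
# Illustrative; setting names and paths are assumptions
post-build-hook = /usr/local/bin/picache-post-build.sh
post-update-hook = /usr/local/bin/picache-post-update.sh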
Group ownership
By default, all users must belong to a common group (for example, 'users'), and the IPLM Cache service account must be a member of that group.
You can limit a given IP to be accessible only by members of a UNIX group by setting the 'unix_group' project property on an IP or Library. See the Workspace Configuration page in the User Guide for more information.
An IP with no group setting will be owned by the default/project group in the cache.
Only the top-level directory of the IP has the new group ownership - this is enough to prevent non-group members from accessing any IP data.
Adding a new group
If a Unix group does not yet exist, you will need to create it with the list of members that will have access to the IP. Depending on your IT infrastructure, you may need to request help to create or modify Unix groups.
The IPLM Cache service account must be a member of all groups defined in IPs. It uses the 'sg' command to wrap group operations, so it is not necessary to restart the server after adding the user to a new group.
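Where groups are managed locally, the steps might look like the following sketch; the group name and member names are illustrative, and sites using a directory service (LDAP/NIS) will manage groups through that service instead:
# Create the group named in the IP's 'unix_group' property and add the users
# that should have access (names are placeholders)
sudo groupadd group129
sudo usermod -aG group129 alice
sudo usermod -aG group129 bob
# The IPLM Cache service account (mdxadmin in the earlier examples) must also
# be a member of the group
sudo usermod -aG group129 mdxadmin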
Starting and stopping IPLM Cache
The IPLM Cache server and its dependencies start automatically when the host boots and are restarted as required during upgrades.
Starting IPLM Cache manually
> service picache start
Restarting IPLM Cache
> service picache restart
Stopping IPLM Cache
> service picache stop
Cleaning up unused IPVs
Automatic cleanup
IPLM Cache can be configured to automatically remove unused IPVs to use cache space efficiently. This feature requires that IPLM Cache use MongoDB, which as of release v1.6.0 is optional (see the picache.conf Configuration file page for the mongod-host setting).
With IPLM Cache v1.7.0, the concept of an "unused IPV" has changed. Now, an IPV in the cache is automatically removed when the ipv-cleanup setting is set to True, the pi-server and pi-server-credentials-file settings are set properly (see the picache.conf Configuration file page for more information on these settings), and one of the following holds:
- The IPV was loaded into the cache with a corresponding IPLM workspace, or it was published to the cache without an IPLM workspace (e.g. via 'pi ip publish') but then afterwards a workspace referred to the published IPV, and then later there are no IPLM workspaces that refer to the IPV. In this case, the IPV will be removed from the cache the next time the IPLM Cache maintenance process wakes up and performs its activities.
- The IPV was published to the cache without an IPLM workspace (e.g. via 'pi ip publish'), never had a workspace refer to the IPV, and the period of time given by the ipv-cleanup-days setting has expired. Again, the IPV will be removed from the cache the next time the IPLM Cache maintenance process wakes up and performs its activities.
Prior to IPLM Cache v1.7.0, an IPV was considered unused if it had not been accessed for the period of time given by the ipv-cleanup-days setting. Pi Client notifies IPLM Cache that the IPVs in a workspace are in use on each load and update operation, resetting the access clock used for cleanup.
An IPV that has been removed is recreated when next requested by a client.
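Taken together, a picache.conf fragment that enables automatic cleanup might look like the following sketch; the host names, credentials path, and day count are illustrative assumptions (see the picache.conf Configuration file page for the authoritative settings):
[main]
# Illustrative values only -- substitute your site's servers and paths
mongod-host = mongodb.example.com
ipv-cleanup = True
ipv-cleanup-days = 30
pi-server = https://iplm.example.com:8080
pi-server-credentials-file = /etc/mdx/pi-server-credentials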
Manual Cleanup
IPVs can be removed from the cache with the "pi-admin picache remove" command. See the pi-admin Global Configuration page for details on the pi-admin command.
Filelist compression from IPLM Client to IPLM Cache
The data sent to IPLM Cache from the IPLM Client over HTTP includes file lists. These file lists can be large for bigger IPVs. To minimize the amount of data sent, Pi Client can be configured to compress the file list it sends to IPLM Cache, and IPLM Cache can be configured to decompress the file list it receives (zlib compression and decompression is used). To enable the use of compression, in the picache.conf configuration file, in the [main] section, set the compression configuration item to True. IPLM Cache v1.5.0 and later communicate this setting to Pi Client, and Pi Client v2.35.1 and later use this setting.
# Uncomment to enable filelist compression between Pi Client and IPLM Cache.
# IPLM Client v2.35.1 and later use this setting in IPLM Cache. Earlier IPLM Client
# versions enable client-side compression by setting the IPLM Client configuration
# file's 'use_compression' setting, in the file's [PICACHE] section, to True.
compression = True
For earlier versions of IPLM Client, to enable the use of compression client-side, in the piclient.conf configuration file, in the [PICACHE] section, set the use_compression configuration item to True:
## Uncomment to enable filelist compression client-side when enabled on
## server-side.
## This can also be set through the MDX_PICACHE_COMPRESSION environment
## variable.
## From version 2.35.1 and later, we use the IPLM Cache 1.5 (and up) compression
## setting and the client compression setting will be ignored. For IPLM Cache
## version 1.4.1 and older, IPLM Client's compression setting will be used.
## Server-side compression is enabled by setting the IPLM Cache configuration
## file's 'compression' setting, in the file's [main] section, to True.
use_compression = True
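For those earlier clients, the environment variable mentioned in the snippet above offers an alternative to editing piclient.conf; the value shown is an assumption:
export MDX_PICACHE_COMPRESSION=True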
Cleanup of pending jobs in job queues
To clean up pending jobs in the job queues, first stop IPLM Cache:
service picache stop
Then run the picache-job-purge.sh tool (v1.6.0 and later) or the job_purge.pyc tool (versions prior to v1.6.0):
/usr/share/mdx/products/picache/bin/picache-job-purge.sh
or
/usr/share/mdx/products/picache/local/bin/python /usr/share/mdx/products/picache/local/lib/python2.7/site-packages/methodics/picache_server/tools/job_purge.pyc
IPLM Cache Perforce settings
- For best performance it is recommended to follow Perforce Performance Tuning.
- Please review Important Perforce Server Configuration Variables.
- Setting client.readonly.dir will help improve Perforce performance and prevent accidental edits in cache clients (see the example after this list).
- See also the p4-readonly configuration setting on the picache.conf Configuration file page.
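As a sketch of the client.readonly.dir recommendation above, the configurable can be set on the Perforce Core Server with p4 configure; the directory path is an illustrative assumption:
# Directory where read-only client metadata is stored; path is a placeholder
p4 configure set client.readonly.dir=/p4/readonly_clients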
SSL termination
IPLM Cache, the version of Redis in the mdx-backend-redis package, and the version of MongoDB in the mdx-backend-mongodb package do not natively support TLS/SSL. Another technology, like HAProxy or Stunnel, must be used to provide SSL termination. See Security for more information.
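As one possible approach, a minimal Stunnel sketch is shown below; the TLS port, backend address, and certificate path are illustrative assumptions and must be adjusted to match your IPLM Cache deployment:
; /etc/stunnel/picache-ssl.conf -- illustrative values only
; Terminate TLS on port 5443 (assumed) and forward to the plain-text
; IPLM Cache listener on 127.0.0.1:5000 (assumed)
cert = /etc/stunnel/picache.pem

[picache-ssl]
accept  = 5443
connect = 127.0.0.1:5000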