Performance FAQs
As IP catalogs grow to 100K or more IPs, unfiltered, open-ended list commands can generate more data than is useful and tie up system resources unnecessarily.
Perforce IPLM provides a number of ways to limit the output generated by list commands on the CLI.
Setting a Platform Default Limit
The 'max_list_records_limit' setting in the $MDX_CONFIG_DIR/piclient.conf file can be used to limit the output of 'pi ip list' and 'pi ws list' to the desired number of results.
## Set the max number of records returned by pi ip ls and pi ws ls.
## This can also be set through the MDX_MAX_LIST_RECORDS_LIMIT environment variable.
max_list_records_limit = 1000
Any results beyond the configured value are truncated by default, and a warning message is shown to the user.
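The truncation behavior can be sketched in a few lines of Python. This is an illustrative sketch only, not IPLM source code; the function name and record format are hypothetical:

```python
import sys

def truncate_records(records, limit):
    """Return at most `limit` records, warning on stderr when output is truncated."""
    if limit is not None and len(records) > limit:
        print(f"Warning: output truncated to {limit} of {len(records)} records",
              file=sys.stderr)
        return records[:limit]
    return records

# With max_list_records_limit = 1000, a 2500-record result is cut to 1000.
ips = [f"tutorial.ip{i}@1.TRUNK" for i in range(2500)]
shown = truncate_records(ips, 1000)
print(len(shown))
```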
Setting a Record Limit by Command
The 'pi ip list' and 'pi workspace list' commands support the --limit option, which limits the results returned by either command to the specified number. The --limit option overrides the 'max_list_records_limit' value.
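The precedence between the two settings can be sketched as follows (illustrative Python, not IPLM source): a per-command --limit wins over the configured platform default.

```python
def effective_limit(cli_limit, config_limit):
    """A --limit given on the command line takes precedence over
    the max_list_records_limit value from piclient.conf."""
    return cli_limit if cli_limit is not None else config_limit

print(effective_limit(None, 1000))  # config default applies: 1000
print(effective_limit(50, 1000))    # --limit overrides the default: 50
```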
Searching Results
--match-substring search
The 'pi ip list' and 'pi workspace list' commands support the --match-substring option, which performs a simple substring search across the fields output by these commands. The fields searched depend on the command configuration in the piclient.conf file.
> pi ip list --all --match-substring pad
┌──────────────────────────┬────┬─────────────┬────────────┬───────────────────────────────┬────────────────────┐
│ NAME                     │ DM │ ALIASES     │ CREATED BY │ CREATED ON                    │ VERSION MESSAGE    │
╞══════════════════════════╪════╪═════════════╪════════════╪═══════════════════════════════╪════════════════════╡
│ tutorial.padring@1.L1    │ P4 │ HEAD LATEST │ admin      │ 2020-01-10 11:01:31 -0800 PST │ padring L1 release │
│ tutorial.padring@0.L1    │ P4 │             │ admin      │ 2020-01-10 11:01:26 -0800 PST │ variant L1         │
│ tutorial.padring@1.L2    │ P4 │ HEAD LATEST │ admin      │ 2020-01-10 11:01:34 -0800 PST │ padring L2 release │
│ tutorial.padring@0.L2    │ P4 │             │ admin      │ 2020-01-10 11:01:27 -0800 PST │ variant L2         │
│ tutorial.padring@1.L3    │ P4 │ HEAD LATEST │ admin      │ 2020-01-10 11:01:38 -0800 PST │ padring L3 release │
│ tutorial.padring@0.L3    │ P4 │             │ admin      │ 2020-01-10 11:01:27 -0800 PST │ variant L3         │
│ tutorial.padring@1.L4    │ P4 │ HEAD LATEST │ admin      │ 2020-01-10 11:01:42 -0800 PST │ padring L4 release │
│ tutorial.padring@0.L4    │ P4 │             │ admin      │ 2020-01-10 11:01:28 -0800 PST │ variant L4         │
│ tutorial.padring@5.TRUNK │ P4 │ HEAD LATEST │ admin      │ 2020-01-10 11:01:24 -0800 PST │ tutorial release   │
│ tutorial.padring@4.TRUNK │ P4 │             │ admin      │ 2020-01-10 11:01:23 -0800 PST │ tutorial release   │
│ tutorial.padring@3.TRUNK │ P4 │             │ admin      │ 2020-01-10 11:01:23 -0800 PST │ tutorial release   │
│ tutorial.padring@2.TRUNK │ P4 │ GOLD        │ admin      │ 2020-01-10 11:01:22 -0800 PST │ tutorial release   │
│ tutorial.padring@1.TRUNK │ P4 │ GOLD        │ admin      │ 2020-01-10 11:01:18 -0800 PST │ tutorial release   │
│ tutorial.padring@0.TRUNK │ P4 │             │ admin      │ 2020-01-10 11:01:03 -0800 PST │ Initial version    │
└──────────────────────────┴────┴─────────────┴────────────┴───────────────────────────────┴────────────────────┘
Found 14 matching object(s).
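A substring match of this kind can be sketched in plain Python. This is illustrative only, not IPLM source; the field names here are hypothetical stand-ins for whatever fields the command is configured to display in piclient.conf:

```python
def match_substring(records, fields, needle):
    """Keep records where any of the configured fields contains the substring."""
    return [r for r in records
            if any(needle in str(r.get(f, "")) for f in fields)]

records = [
    {"name": "tutorial.padring@1.L1", "created_by": "admin",
     "version_message": "padring L1 release"},
    {"name": "tutorial.cpu@1.TRUNK", "created_by": "admin",
     "version_message": "cpu release"},
]

hits = match_substring(records, ["name", "created_by", "version_message"], "pad")
print(len(hits))  # only the padring record matches
```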
Query Search
The Perforce IPLM Query language can be used to search for any objects in Perforce IPLM across any field associated with the object.
IPLM Client Performance Profiling
This section describes a method for profiling file system caching and network latency.
When using the bash shell, the time command can be used to get the overall command latency, and the --profile option can be used to determine network latency.
$ time pi --profile version
PiServer version : 2.22.1
PiClient version : 2.22.1
┌───────────────────────────┬─────┬───────┬───────┬───────┬───────┐
│ METHOD                    │ CNT │ MIN   │ AVG   │ MAX   │ TOTAL │
╞═══════════════════════════╪═════╪═══════╪═══════╪═══════╪═══════╡
│ API                       │ 1   │ 0.025 │ 0.025 │ 0.025 │ 0.025 │
│ GET /cli/v1/system/info   │ 1   │ 0.025 │ 0.025 │ 0.025 │ 0.025 │
└───────────────────────────┴─────┴───────┴───────┴───────┴───────┘

real    0m1.062s
user    0m0.340s
sys     0m0.360s
Here we can see that the entire command took 1062 milliseconds to execute (time command output), of which 25 milliseconds (pi --profile output) was spent doing a single API call across the network.
The remaining 1037 milliseconds (1062 - 25) was startup time: locating the executable on disk (via NFS or static mounts), starting the Python interpreter, initiating the API call, and displaying the results.
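The split above is simple arithmetic over the two measurements (figures taken from the sample output):

```python
total_ms = 1062  # wall-clock time reported by the shell's `time`
api_ms = 25      # network/API time reported by `pi --profile`

# Everything that is not network time is client-side startup overhead.
startup_ms = total_ms - api_ms
print(startup_ms)  # 1037
```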
This startup time can vary based on things like:

- NFS caching
- kernel caching of the executable
The network time to do the API call can vary based on things like:

- Network routing
- Network accelerators (for example, https://www.riverbed.com/products/steelhead/index.html)
Running this profiling command several times will reveal items like file system latency and network latency, and can also be used to identify the impact of caching systems such as NFS or network accelerators. Repeating the command within a short window should make caching effects visible, while repeating it over a longer window should expose latency trends.
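Repeated timing runs can be automated with a small script. The sketch below is illustrative (not part of IPLM); it uses a trivially cheap command so it runs anywhere, but in a real environment you would substitute something like ["pi", "--profile", "version"]:

```python
import subprocess
import time

def time_command(cmd, runs=5):
    """Run `cmd` repeatedly and return per-run wall-clock times in seconds.
    The first run is typically slower (cold caches); later runs show the
    warm-cache baseline."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return samples

samples = time_command(["true"], runs=5)
print(min(samples), sum(samples) / len(samples), max(samples))
```

Comparing the min (warm) against the max (usually the cold first run) gives a rough measure of how much the caches are helping.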
Storage
Occasionally a command may take longer than expected, and it is worth checking the state of the underlying storage.
SSD trim
> fstrim -v /
/: 14.7 GiB (15799980032 bytes) trimmed
Disk performance check
root@mdx-ev2:~# hdparm -Tt /
/:
 Timing cached reads: 27676 MB in 2.00 seconds = 13852.06 MB/sec
 Timing buffered disk reads: 174 MB in 3.01 seconds = 57.74 MB/sec