Important P4 Server Configuration Variables

These variables are set with the 'p4 configure set' command, for example:

p4 configure set myserver#net.parallel.max=21

To show all the configurables currently set, use:

> p4 configure show allservers
any: net.parallel.max = 21
any: net.parallel.min = 1000
any: net.parallel.minsize = 500M
any: net.parallel.submit.min = 1000
any: net.parallel.submit.threads = 21
any: net.parallel.threads = 21
tau-deploy-test-c6: P4LOG = /usr/share/mdx/repos/p4/logs/tau-deploy-test-c6.log
tau-deploy-test-c6: P4TICKETS = /usr/share/mdx/repos/p4/.p4tickets
tau-deploy-test-c6: journalPrefix = /usr/share/mdx/repos/backup/p4d_backup_tau-deploy-test-c6
tau-deploy-test-c6: lbr.bufsize = 64k
tau-deploy-test-c6: net.tcpsize = 512k
tau-deploy-test-c6: server.depot.root = /usr/share/mdx/repos/p4/depots

Client/Server Performance Variables

net.parallel.max

Parallel File Transfer between Client and Server

Setting net.parallel.max to a value greater than one (maximum 100) enables up to that number of parallel threads during sync and submit operations. If you do not also set the additional net.parallel.submit.* configurables, clients must use the --parallel option to get parallel submits; if those configurables are set, the --parallel option is not needed.

To use net.parallel.max, the p4 version must be 2015.1 or later.

Typical value = 21
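As a sketch of enabling parallel transfers server-wide (values match the typical settings above; the thresholds keep small syncs serial):

```shell
# Enable up to 21 parallel transfer threads for all servers
p4 configure set net.parallel.max=21

# Thresholds: small syncs will not trigger parallel transfers
p4 configure set net.parallel.min=1000
p4 configure set net.parallel.minsize=500M

# Verify the settings
p4 configure show allservers
```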

net.parallel.min

Minimum Files to initiate a Parallel File Transfer

Specifies the minimum number of files in a parallel sync. A sync that is too small will not initiate parallel file transfers.

Typical value = 1000

net.parallel.minsize

Minimum number of bytes to initiate a Parallel File Transfer

Specifies the minimum number of bytes in a parallel sync. A sync that is too small will not initiate parallel file transfers.

Typical value = 500M

net.parallel.submit.min

Minimum number of files to be sent in a Parallel Submit

Specifies the minimum number of files to send in a parallel submit. A submit that is too small will not initiate parallel file transfers.

Typical value = 1000

net.parallel.submit.threads

Number of threads to be used for Submitting files in Parallel

Specifies the number of threads to use for sending files in parallel.

Typical value = 21

net.parallel.threads

Number of network connections to be used to sync files in parallel

Specifies the number of network connections to use for syncing files in parallel.

Typical value = 21
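When the server-side thresholds above are not configured, clients can request parallel transfers explicitly with the --parallel option. A hedged sketch (the depot path and thread counts are illustrative):

```shell
# Sync with up to 8 threads, only when 1000+ files or ~500 MB+ are involved
# (minsize here is specified in bytes)
p4 sync --parallel=threads=8,min=1000,minsize=524288000 //depot/project/...

# Parallel submit (requires 2015.1 or later)
p4 submit --parallel=threads=8 -d "Parallel submit example"
```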

lbr.autocompress

Store text files as compressed text (ctext) rather than RCS format text

Enabling this configurable stores files of type text as compressed text (ctext) rather than RCS format text; the user still sees the file type as text. This provides a performance boost in distributed environments and wherever archive files are shared between servers, so it is a good idea to set it when using a commit/edge configuration. Files using the RCS storage format undergo multiple file format conversions during submit; storing each revision as an individual gzipped archive (text+C) avoids this penalty. It also makes archiving and storage deduplication easier, since each revision is an individual file instead of a delta inside a single archive file.
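A minimal sketch of enabling it (the depot path is illustrative; existing revisions keep their current storage format):

```shell
# Store new text revisions as individual gzipped archives (text+C)
p4 configure set lbr.autocompress=1

# Users still see the file type as 'text'
p4 files //depot/project/README.txt
```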

Master/Replica Performance Variables

rpl.compress

Enable compression on Master/Replica communication

rpl.compress controls whether, and which, data streams are compressed in communication between the master and replica (forwarding replica, edge server, etc.) servers. Compression can result in significant performance improvements, especially when the servers are separated by large geographic distances.

Typical value = 2

rpl.compress values
0: No data stream compression
1: Data streams used for archive transfer to the replica (p4 pull -u) are compressed.
2: Data streams used by p4 pull -u and p4 pull are compressed.
3: All data streams (p4 pull -u, p4 pull, and data streams for commands forwarded to the master or commit server) are compressed.
4: Compress only the journal pull and journal copy connections between the replica and the master.
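For example, to compress both archive and metadata pull traffic for a replica named Replica1 (a hypothetical server name), matching the typical value of 2 above:

```shell
# Compress the p4 pull and p4 pull -u data streams for this replica
p4 configure set Replica1#rpl.compress=2

# Confirm the per-server setting
p4 configure show Replica1
```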

Server Configuration Variables

client.readonly.dir

Disallow attempted checkout of refer data; prevent db.have fragmentation due to automated builds

A known P4 Server issue means that an attempt to p4 edit a file in a refer mode IPV will fail (the file is not made editable) but will leave behind a lock on the file. To avoid this, set this server configurable to support readonly clients, which properly disallows the p4 edit attempt. If the cache service account is also a P4 administrator, the cache server will query p4 for this variable and use readonly clients automatically.

Build automation scripts can, over time, fragment the db.have table through frequent creation and destruction of clients. This variable sets the directory for read-only client specifications; by setting the Type field of an automated build client to 'readonly', that client gets its own personal db.have table and does not degrade performance for other clients.
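A sketch of the server-side setup, assuming /p4/readonly_clients is the chosen directory (the path and client names are illustrative):

```shell
# Directory where read-only client specs are stored (a server-side path)
p4 configure set client.readonly.dir=/p4/readonly_clients

# A build client then opts in by setting its Type field in the client spec:
#   Type: readonly
p4 client build-client-001
```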

P4LOG

Location of P4 output log

Full or relative path of the P4LOG file

Typical value = <path to P4ROOT>/log/server.log

P4TICKETS

Location of P4 tickets file

Full or relative path of the P4TICKETS file (note: the path must include the file name; it cannot be just the directory)

Typical value = <path to P4ROOT>/tickets/.p4tickets

journalPrefix

Prefix of the journal file

Path plus file prefix of the journal file. Note that the journal should be placed on a different physical disk from the P4 database for disaster recovery. Placing the journal on a different physical disk from the P4 database and the managed files will also improve overall system performance.

Typical value = /<journalpath>/p4d_journal_<servername>_

server.depot.root

Filesystem location of managed depot files

Location of the managed depot files.  Placing the depot files on a different physical disk than the P4 database and the journal file improves overall system performance.

Typical value = /<path to depot root>
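Putting the filesystem-related variables together for a server named myserver (all names and paths here are illustrative), with the database, journal, and depot files on separate physical disks:

```shell
# Log and tickets on the same disk as the database is acceptable
p4 configure set myserver#P4LOG=/disk1/p4/logs/myserver.log
p4 configure set myserver#P4TICKETS=/disk1/p4/tickets/.p4tickets

# Journal on a second disk, for disaster recovery and performance
p4 configure set myserver#journalPrefix=/disk2/p4/backup/p4d_journal_myserver_

# Managed depot files on a third disk
p4 configure set myserver#server.depot.root=/disk3/p4/depots
```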