MongoDB authentication

Set up MongoDB with access control so that users of the database are required to identify themselves. Each step in this process must be completed before proceeding to the next:

  1. Initial MongoDB setup - Install the mdx-backend-mongodb package and get MongoDB initially up and running without access control.

  2. Set up authentication between cluster members - Prepare a keyfile with a password that MongoDB cluster members use to authenticate to other cluster members. This is not needed for a single-node instance.

  3. Add MongoDB users - Add an admin user and an IPLM Cache user to the MongoDB admin database. Use the admin user later when accessing MongoDB from the mongo shell utility. You will configure IPLM Cache to use the IPLM Cache user to access MongoDB. Restart the MongoDB instances and ensure the cluster or single instance comes up correctly with access control fully enabled.

  4. Configure IPLM Cache to use MongoDB with access control - Install the mdx-picache package on all IPLM Cache nodes, and configure the IPLM Cache instances to connect to MongoDB using the credentials of the IPLM Cache user.

Initial MongoDB setup

Before MongoDB can be set up with access control, it must first be brought up without it. This procedure shows how to set up a MongoDB HA cluster. For a single-node MongoDB instance, omit the cluster-related steps.

  1. Refer to Perforce IPLM deployment and installation guide and Perforce IPLM package installation, and install the mdx-backend-mongodb package on the targeted machines.

    Note:  To run IPLM Cache later, ensure that the mdx-backend-redis package is also installed and that Redis is properly configured and running.
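
    A quick way to confirm that Redis is running before continuing (assuming the Redis package provides a service named mdx-backend-redis, following the same naming convention as the MongoDB package) is:

    # service mdx-backend-redis status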

  2. Refer to Configuring IPLM Cache for High Availability and, on each machine where MongoDB is installed, edit the MongoDB configuration file /etc/mdx/mdx-backend-mongodb.conf so that it binds to the machine's IP address and, if appropriate, is set up to work in a cluster.

    In this example, note the following:

    • The format of the net:bindIp setting, with an externally accessible IP address used and no spaces in the list of bound IP addresses.

    • The net:unixDomainSocket:enabled setting is false since an externally accessible IP address is used.

    • The replication section in this example shows the configuration of an HA MongoDB cluster.

    • The cacheSizeGB setting limits the amount of memory that MongoDB uses, to prevent out-of-memory issues. If it is not set, the cache size defaults to the larger of 50% of (RAM - 1 GB) or 256 MB.

    systemLog:
        destination: file
        path: "/var/log/mdx-backend-mongodb/mongod.log"
        logAppend: true
    storage:
        dbPath: "/var/log/mdx-backend-mongodb/mongodb"
        journal:
            enabled: true
        wiredTiger:
            engineConfig:
                cacheSizeGB: 4
    processManagement:
        fork: true
        pidFilePath: "/var/run/mdx-backend-mongodb/mongod.pid"
    net:
        bindIp: 10.211.55.18,127.0.0.1
        port: 27017
        unixDomainSocket:
            enabled: false
            pathPrefix: "/var/run/mdx-backend-mongodb"
    replication:
        replSetName: rs0
  3. For a MongoDB HA cluster, ensure that the /etc/mdx/mdx-backend-mongodb.conf configuration file on each member has the correct net:bindIp setting for that machine; the rest of the settings can be the same. The replication:replSetName setting must be identical on all cluster members.
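
    One way to spot-check these settings on each member is to grep the configuration file; on the first member from the example above, this shows:

    # grep -E 'bindIp|replSetName' /etc/mdx/mdx-backend-mongodb.conf
        bindIp: 10.211.55.18,127.0.0.1
        replSetName: rs0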

  4. Start the mdx-backend-mongodb service on each member of the cluster, for example:

    # service mdx-backend-mongodb start
    Starting Backend MongoDB Server ...                           done
  5. On one cluster member only, go into the mongo shell and initiate the replica set, for example:

    # which mongo
    /usr/share/mdx/mongodb/mongodb-linux-x86_64-3.2.20/bin/mongo
    # mongo
    MongoDB shell version: 3.2.20
    connecting to: test
    ...
    > rs.initiate({
    ... _id: "rs0",
    ... members: [
    ... {_id: 0, host: "10.211.55.18:27017"},
    ... {_id: 1, host: "10.211.55.19:27017"},
    ... {_id: 2, host: "10.211.55.20:27017"}
    ... ]
    ... })
    { "ok" : 1 }

    One member will now become the Primary instance and the other two will become Secondary instances. Periodically press ENTER until you see one of the corresponding prompts, such as:

    rs0:PRIMARY>

    In this case, the mongo shell on the other two instances will show they are secondary instances, such as:

    rs0:SECONDARY>
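
    You can also confirm each member's state from the mongo shell with rs.status(). For example, a minimal check (hosts from the rs.initiate() example above; which member becomes PRIMARY may differ):

    rs0:PRIMARY> rs.status().members.forEach(function (m) { print(m.name + " " + m.stateStr) })
    10.211.55.18:27017 PRIMARY
    10.211.55.19:27017 SECONDARY
    10.211.55.20:27017 SECONDARY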

Set up authentication between cluster members

To add access control to MongoDB, create a keyfile so that the cluster members can internally authenticate with each other. See the MongoDB v3.2 Internal Authentication page for more information.

This section only applies to a MongoDB cluster.

  1. Create the keyfile containing the password. The password must be between 6 and 1024 characters long and may contain only characters in the base64 set; MongoDB ignores whitespace in the keyfile. The keyfile must not have group or world permissions, and its owner must be the service account that the IPLM Cache worker processes run as (in /etc/mdx/picache.conf, this is given by the worker-user setting, which defaults to mdxadmin). For example:

    # cd /etc/mdx
    # echo -e "MongoDBSecretKey" > mongodb_cluster_auth_file
    # chown mdxadmin:mdxadmin mongodb_cluster_auth_file
    # chmod 600 mongodb_cluster_auth_file
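
    If you prefer a randomly generated key over a fixed string, openssl (if it is available on the machine) can produce one that satisfies the length and base64 requirements, for example:

    # openssl rand -base64 48 > /etc/mdx/mongodb_cluster_auth_file
    # chown mdxadmin:mdxadmin /etc/mdx/mongodb_cluster_auth_file
    # chmod 600 /etc/mdx/mongodb_cluster_auth_file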
  2. Copy or recreate the keyfile on each cluster member.
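
    For example, to copy the keyfile from the first member to the other members (hosts from the earlier example; use whatever account and transfer method suits your environment), and then re-apply the ownership and permissions on each member:

    # scp -p /etc/mdx/mongodb_cluster_auth_file root@10.211.55.19:/etc/mdx/
    # scp -p /etc/mdx/mongodb_cluster_auth_file root@10.211.55.20:/etc/mdx/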

  3. Edit the MongoDB configuration file /etc/mdx/mdx-backend-mongodb.conf on each node to add a reference to the keyfile. For example:

    security:
        keyFile: /etc/mdx/mongodb_cluster_auth_file

Set up authentication in a single-node instance

For a single-node instance of MongoDB, enable access control by editing the /etc/mdx/mdx-backend-mongodb.conf file and adding the following at the bottom of the file:

security:
    authorization: enabled

Add MongoDB users

Next, add users to the admin database. See the MongoDB v3.2 Enable Auth page for more information. Two users are added below: an admin user for later mongo shell activities and a picache user that IPLM Cache uses to access MongoDB. Use whatever usernames and passwords you wish; just note them for later use.

  1. In the mongo shell on the Primary instance of the cluster, or in the mongo shell on the single-instance MongoDB node, switch to the admin database and create a MongoDB admin user (used for later operations in the mongo shell) and an IPLM Cache user. For example:

    rs0:PRIMARY> use admin
    switched to db admin
    rs0:PRIMARY> db.createUser(
    ... {
    ... user: "picacheAdmin",
    ... pwd: "abc123",
    ... roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
    ... }
    ... )
    Successfully added user: {
        "user" : "picacheAdmin",
        "roles" : [
            {
    	     "role" : "userAdminAnyDatabase",
    	     "db" : "admin"
    	 }
        ]
    }
    rs0:PRIMARY> db.createUser(
    ... {
    ... user: "picache",
    ... pwd: "xyz123",
    ... roles: [ { role: "readWrite", db: "picache" } ]
    ... }
    ... )
    Successfully added user: {
        "user" : "picache",
        "roles" : [
            {
    	     "role" : "readWrite",
    	     "db" : "picache"
            }
        ]
    }
    rs0:PRIMARY> show users
    {
        "_id" : "admin.picacheAdmin",
        "user" : "picacheAdmin",
        "db" : "admin",
        "roles" : [
            {
    	     "role" : "userAdminAnyDatabase",
    	     "db" : "admin"
             }
        ]
    }
    {
        "_id" : "admin.picache",
        "user" : "picache",
        "db" : "admin",
        "roles" : [
            {
    	     "role" : "readWrite",
    	     "db" : "picache"
    	  }
        ]
    }
  2. Press Ctrl-d to exit the mongo shell.

  3. Stop the mdx-backend-mongodb service on each machine.

    # service mdx-backend-mongodb stop
    Stopping Backend MongoDB Server ...            done

    From this point on, once the service is started again with access control enabled, you must authenticate as the admin user in the mongo shell before you can do any operations. For example:

    # mongo
    MongoDB shell version: 3.2.20
    connecting to: test
    rs0:PRIMARY> use admin
    switched to db admin
    rs0:PRIMARY> db.auth("picacheAdmin", "abc123" )
    1
    rs0:PRIMARY> show dbs
    admin    0.000GB
    local    0.001GB
    picache  0.001GB
    rs0:PRIMARY> 
  4. Start the mdx-backend-mongodb service on each machine.

    # service mdx-backend-mongodb start
    Starting Backend MongoDB Server ...          done
  5. Using the mongo shell on each machine, verify one instance shows PRIMARY and the other two instances show SECONDARY. You may see the temporary status of RECOVERING. Periodically press ENTER until you see a PRIMARY or SECONDARY prompt. If a node’s prompt does not eventually change from RECOVERING, check the node’s MongoDB configuration and restart MongoDB on that node.
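
    The MongoDB log, whose path is set in the configuration example earlier in this section, can help diagnose a node that does not leave the RECOVERING state, for example:

    # tail -n 50 /var/log/mdx-backend-mongodb/mongod.log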

Configure IPLM Cache to use MongoDB with access control

Once MongoDB has been set up with access control, IPLM Cache must be configured to access MongoDB with username and password credentials.

  1. Refer to Perforce IPLM deployment and installation guide and Perforce IPLM package installation, and install the mdx-picache-lib and mdx-picache packages on the targeted machines.

  2. Prepare an IPLM Cache MongoDB credentials file that contains the username and password of the MongoDB picache user created in Add MongoDB users above. The credentials file's permissions should be restricted, but make sure IPLM Cache's main process can access the file. For example, create a file on all the machines IPLM Cache is installed on with the name /etc/mdx/mongodb-credentials.txt that contains the following example user name and password:

    picache
    xyz123
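
    To restrict access to the credentials file while keeping it readable by IPLM Cache (assuming the IPLM Cache processes run as the mdxadmin account, the worker-user default mentioned earlier), set its ownership and permissions, for example:

    # chown mdxadmin:mdxadmin /etc/mdx/mongodb-credentials.txt
    # chmod 600 /etc/mdx/mongodb-credentials.txt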
  3. Edit the /etc/mdx/picache.conf IPLM Cache configuration file, ensuring the MongoDB related settings, among others, are set correctly. For example:

    ...
    mongodb-credentials-file = /etc/mdx/mongodb-credentials.txt
    ...
    #mongod-host = localhost:27017
    mongod-host = 10.211.55.18:27017, 10.211.55.19:27017, 10.211.55.20:27017
    ...
    mongod-rs = rs0
    ...
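
    Before starting IPLM Cache, you can optionally verify that the picache credentials work against the replica set by connecting with the mongo shell directly (hosts, replica set name, username, and password taken from the earlier examples):

    # mongo --host rs0/10.211.55.18:27017,10.211.55.19:27017,10.211.55.20:27017 \
          -u picache -p xyz123 --authenticationDatabase admin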
  4. First start the mdx-backend-mongodb and mdx-backend-redis services on the machines they are installed on, then start the picache service on the machines IPLM Cache is installed on. Ensure IPLM Cache starts up successfully on all IPLM Cache nodes.

    # service picache start
    Starting PiCacheWdog:               done
    Starting PiCacheWorker ...          done
    Starting PiCacheAPI ...             done
  5. Verify that IPLM Cache operations successfully use the MongoDB database. For example:

    $ pi ip load tutorial.verif_config
    Loading IPV 'tutorial.verif_config@2.TRUNK' into Workspace '/home/bob/workspaces/tutorial.verif_config'.
    INFO:Waiting for 1 PiCache job ...
    ┌───────────────────────┬─────────┬───────┬───────────────┐
    │ NAME                  │ VERSION │  MODE │ RELATIVE PATH │
    ╞═══════════════════════╪═════════╪═══════╪═══════════════╡
    │ tutorial.verif_config │ 2.TRUNK │ Refer │ verif_config  │
    └───────────────────────┴─────────┴───────┴───────────────┘
    $ picache-query-mongo-log.sh --exc
    
    Number of total log records: 1486
    
    No log records matched the search criteria.
    
    $ picache-ipv-admin.sh --list --all
    
    tutorial.verif_config@2.TRUNK
    
    Use -v/--verbose to get full info for each IPV
    
    Number of IPVs in cache: 1
    
    $ pi ws del tutorial.verif_config; picache-ipv-admin.sh --remove tutorial.verif_config@2.TRUNK
    Successfully deleted Workspace '/home/bob/workspaces/tutorial.verif_config'.
    
    Removed IPV tutorial.verif_config@2.TRUNK
    
    $ picache-ipv-admin.sh --list --all
    
    No IPV documents found
    
    Use -v/--verbose to get full info for each IPV
    
    Number of IPVs in cache: 0