Loading a Workspace

Helix IPLM workspaces are built from Helix IPLM releases and consist of natively managed DM data that may come from one or more DM systems. Once built, the data in the workspace is accessed and managed with standard DM applications and flows. Helix IPLM stays out of the way of these flows until requested, for example to bring in new IPV updates, compare the workspace data to releases, or generate a new release from the workspace.

It is additionally possible to capture the workspace state in a Workspace Snapshot, publishing the in-flight state of the workspace back to the server without making a full release. This Snapshot data object is useful for copying and sharing a workspace state for the purposes of debugging, providing or receiving technical support, and exchanging patch information between IP consumers and providers. See Capturing the Workspace State with Snapshots for more information.

Helix IPLM workspace overview

Helix IPLM workspaces exist both on disk, as the physical workspace, and as metadata objects stored on the Helix IPLM server. Storing workspaces on the server provides centralized traceability of design activities in the workspace. Information about the workspace, including which IPVs are currently loaded in any given workspace, can be accessed directly from the server or via metadata stored and updated locally in the workspace.

IPLM Cache

Through IPLM Cache, IPs can be loaded into the workspace in either local (fully editable local copy) or refer (read-only cache copy) mode. In most use models it is not necessary to have all design data available for edit in every workspace, and caching IPs with IPLM Cache speeds workspace loading, since including a cached IP in the workspace only requires creating a link to the cache. Keeping IPs in refer mode additionally ensures that no accidental edits are made, which eliminates a source of error when releasing a tapeout.

More details concerning IP caching can be found in the IP Caching section.

Workspace configuration 

Each IP loaded into the workspace is placed in its own directory. Workspace layout and behavior are configured via Project Properties. The available options include the directory in the workspace into which an IP is loaded (--path project property), the Unix group to apply to the IP directory in the workspace (--unix-group project property), whether the IP should be limited to the local workspace, the cache, or both (--mode project property), and the resolution of IP Hierarchy conflicts (--resolve project property).

In addition to configuring how Helix IPLM loads workspaces via project properties, it is also possible to configure accessory file creation and linking in the workspace via the piclient.conf file and its configuration options. More information on workspace configuration can be found on the Workspace Configuration page.

Workspace Helix IPLM vs. DM Operations

Interaction with the Helix IPLM-built workspace takes place on two conceptual levels: the Helix IPLM (IPLM) level and the DM (file) level. The DM level consists of file check-ins, check-outs, file-level updates, and management using DM commands. These DM-level commands may be run directly from the command line, or indirectly through an application like VersIC (Cadence, Synopsys, ADS interface) or P4V (Perforce client-side GUI). In all cases, Helix IPLM doesn't interfere with DM-level activities until needed.

The second level of interaction in the workspace is the Helix IPLM level. Helix IPLM manages the data at the IP level rather than the file level.

Each of these functions is described in more detail in its relevant section. 

Each IP is loaded into its own IP directory in the Helix IPLM workspace. Each IP can be managed at the DM level, and also updated, compared, and released at the Helix IPLM level. 

Perforce IPs in the Workspace

Perforce IPs are handled in a slightly different manner than IPs from some other DM systems. Helix IPLM creates and manages the Perforce client spec, which defines what Perforce data should be placed in the workspace. In many cases all of the Perforce data in the IPs in the workspace comes from the same Perforce server, defined by the P4PORT environment variable. In this case a single p4 client is created at the top level of the workspace. This allows p4 commands to be run against all the Perforce-managed data in the workspace, which provides some performance and convenience gains. If, however, some of the p4 IPs are sourced from a second (or third or fourth) p4 server, then those IPs are built with individual clients, each in its own directory. In this case their server information is captured in the HOST field of the IP definition. More information on multiple p4 server support in Helix IPLM workspaces is available at Using Multiple Perforce Servers with Helix IPLM.
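
Because a single top-level client covers all of the Perforce-managed data in such a workspace, ordinary p4 commands can be run from the workspace root. A minimal sketch, assuming the workspace-level .p4config file is picked up via the standard P4CONFIG mechanism (output is omitted and will vary by site):

Running p4 commands in the workspace
> cd /tmp/workspaces/tutorial.tutorial
> p4 info
> p4 opened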

Perforce IPs (and IPs of any other DM type) can coexist with IPs of other DM types in the same workspace. A key value proposition of Helix IPLM is that it can tie disparate DM systems together in one configuration management system.

Workspaces and Permissions

In order to load an IPV into a workspace, the user loading the workspace must have at least Read-level permission to the IPV's line.

If a user has permission to some of the IPVs in an IP Hierarchy but not all of them, Helix IPLM will load the IP Hierarchy as a 'Partial Workspace'. Workspace updates and releases will work normally for a user in a Partial Workspace: the IPVs that they don't have Read permission to are ignored during updates and are kept at their existing release when new releases are made.

See Workspaces and Permissions for more details.

IP@0 releases (initial release)

The @0 release of a newly created IP doesn't have any release contents in Helix IPLM (see the Creating New IPs section), even if there is already data in the IP's DM repo path. To make a release of the IP with DM contents, either make a release from a workspace (most common) or make a server-side release using one of the Special Release Mechanisms (less common). For more information on populating data in a new IP, see the page on Adding Data to New IPs. When first loaded into a workspace, an @0 release will either be empty or (in the case of some DM Handlers) be populated with the @HEAD data in the IP's repo path (which may also result in an empty directory if no data has been added to the IP).

Conflict resolution 

The IP Hierarchies that are loaded into workspaces may have conflicts between versions and lines of one or more IPVs in the hierarchy. These conflicts are resolved upon workspace creation; see the IP Hierarchy section for details.

Command line

Workspaces are loaded from IP releases using the pi ip load command. Workspace loads are executed from the PiCLI client.

The format of the 'pi ip load' command is:

pi ip load command

> pi ip load -h
Usage: pi ip load [-h] [--args ARGS] [--local LOCAL [--local LOCAL] ... |
                  --local-all] [--view VIEW]
                  ip [workspace]

Description: Load an IP into a Workspace. An IP Workspace is the directory
containing the IP and all its resources. To work with an IP, users load it
into a Workspace.

Positional arguments:
  ip                    The IP to load into the Workspace.
  workspace             The Workspace directory. By default the directory is
                        set to the name of the IP.

Optional arguments:
  --args ARGS           Arguments passed to the hooks.
  --local LOCAL [--local LOCAL] ..., -l LOCAL [--local LOCAL] ...
                        Load the IP named LOCAL in local mode (directly into
                        the Workspace).
  --local-all           Load all IPVs in local mode.
  --view VIEW, -v VIEW  Load the Workspace using the specified View (PWM
                        only).
  -h, --help            Show this help message and exit

Specifying the Workspace Name

The 'workspace' argument of the pi ip load command is optional. If the workspace name is specified, the workspace is loaded into a directory with that name. If no workspace name is specified, the workspace name takes the format LIB.IP, where LIB and IP are the Helix IPLM Library and IP names of the top-level IP loaded into the workspace.

No Workspace Name Specified
> pi ip load tutorial.tutorial
Loading IPV 'tutorial.tutorial@7.TRUNK' into Workspace '/tmp/workspaces/tutorial.tutorial'.
Workspace Name is Specified
> pi ip load tutorial.tutorial myworkspace
Loading IPV 'tutorial.tutorial@7.TRUNK' into Workspace '/tmp/workspaces/myworkspace'.

Setting Local/Refer Mode on Workspace Load

By default, if IPLM Cache is enabled (see the IP Caching section for details), IPVs in the IP Hierarchy loaded by the pi ip load command are loaded in refer mode. This default can be overridden with the following command line options, or by setting the --mode project property.

Command Option Description
--local LOCAL [--local LOCAL] ... Specifying the --local option on an IP-by-IP basis will load the specified IPs in local mode when the workspace is loaded. Note that if the --mode specification for an IPV requires that it always be in refer mode, the --local option will be ignored for that IPV.
--local-all Put all IPVs in the hierarchy in local mode. Any IPVs that have --mode refer set (see Workspace Configuration) won't be made local.
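
For example, to load the tutorial hierarchy with a single IP editable, or with every IP editable (the IP names are taken from the tutorial example below; whether an IPV actually ends up local also depends on its --mode setting):

Loading IPs in local mode
> pi ip load tutorial.tutorial --local tutorial.cpu
> pi ip load tutorial.tutorial --local-all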

Setting a Workspace View

Workspace Views can be applied when the workspace is loaded. See the Using Workspace Views section for details. Workspace views can be used to omit some IP content from the workspace, or to load some workspace content at the @HEAD file revisions in the DM.

Command Option Description
--view VIEW, -v VIEW Load the workspace with the specified workspace view.
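
For example (the view name 'rtl_only' is hypothetical; Views apply to PWM only, as noted in the command help above):

Loading with a Workspace View
> pi ip load tutorial.tutorial --view rtl_only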

Example Workspace Load

The output of pi ip load includes status messages for IPLM Cache jobs, and a summary of the workspace.

Example Workspace Load
> pi ip load tutorial.tutorial
Loading IPV 'tutorial.tutorial@7.TRUNK' into Workspace '/tmp/workspaces/tutorial.tutorial'.
INFO:Waiting for 8 PiCache jobs ...
INFO:Waiting for 4 PiCache jobs ...
INFO:Waiting for 2 PiCache jobs ...
INFO:Waiting for 1 PiCache job ...
┌────────────────────────────┬───────────────────┬───────────┬──────────────────────────┬──────────────────────────┐
│ NAME                       │      VERSION      │    MODE   │ RELATIVE PATH            │ CONFLICTS                │
╞════════════════════════════╪═══════════════════╪═══════════╪══════════════════════════╪══════════════════════════╡
│ tutorial.tutorial          │      7.TRUNK      │   Refer   │ blocks/tutorial          │                          │
│ ARM.cortex                 │      1.TRUNK      │   Refer   │ blocks/cortex            │                          │
│ certification.ibm_rqm      │    0.RQM_6_0_5    │   Refer   │ blocks/ibm_rqm           │                          │
│ tutorial.CADenv            │  GOLD.TRUNK [@1]  │   Refer   │ blocks/CADenv            │                          │
│ tutorial.MS90G             │      1.TRUNK      │   Refer   │ blocks/MS90G             │                          │
│ tutorial.acells_tsmc18     │      1.TRUNK      │   Refer   │ blocks/acells_tsmc18     │                          │
│ tutorial.adc               │  HEAD.TRUNK [@1]  │   Refer   │ blocks/adc               │                          │
│ tutorial.aes512            │      1.TRUNK      │   Refer   │ blocks/aes512            │                          │
│ tutorial.analog_top        │  HEAD.TRUNK [@1]  │   Refer   │ blocks/analog_top        │                          │
│ tutorial.bist_sram         │      1.TRUNK      │   Refer   │ blocks/bist_sram         │                          │
│ tutorial.clk_mux           │      1.TRUNK      │   Refer   │ blocks/clk_mux           │                          │
│ tutorial.clkgen            │  HEAD.TRUNK [@1]  │   Refer   │ blocks/clkgen            │                          │
│ tutorial.cpu               │ LATEST.TRUNK [@2] │   Refer   │ blocks/cpu               │                          │
│ tutorial.dac               │  HEAD.TRUNK [@0]  │ Container │                          │                          │
│ tutorial.dbuf              │      1.TRUNK      │   Refer   │ blocks/dbuf              │                          │
│ tutorial.digital_top       │      2.TRUNK      │   Refer   │ blocks/digital_top       │                          │
│ tutorial.events_if         │      1.TRUNK      │   Refer   │ blocks/events_if         │                          │
│ tutorial.flash             │      1.TRUNK      │   Refer   │ blocks/flash             │                          │
│ tutorial.flash_if          │      1.TRUNK      │   Refer   │ blocks/flash_if          │                          │
│ tutorial.fusa              │ LATEST.TRUNK [@0] │ Container │                          │                          │
│ tutorial.gen_dig           │ LATEST.TRUNK [@2] │   Refer   │ blocks/gen_dig           │ tutorial.gen_dig@1.TRUNK │
│ tutorial.interface         │      1.TRUNK      │   Refer   │ blocks/interface         │                          │
│ tutorial.intf_ana          │  HEAD.TRUNK [@1]  │   Refer   │ blocks/intf_ana          │                          │
│ tutorial.io5v              │      1.TRUNK      │   Refer   │ blocks/io5v              │                          │
│ tutorial.io_tsmc18         │      1.TRUNK      │   Refer   │ blocks/io_tsmc18         │                          │
│ tutorial.laysc_tsmc18      │      1.TRUNK      │   Refer   │ blocks/laysc_tsmc18      │                          │
│ tutorial.padring           │      1.TRUNK      │   Refer   │ blocks/padring           │                          │
│ tutorial.proj_tech         │      1.TRUNK      │   Refer   │ blocks/proj_tech         │                          │
│ tutorial.pwr_mgmt_ana      │  HEAD.TRUNK [@1]  │   Refer   │ blocks/pwr_mgmt_ana      │                          │
│ tutorial.rx_channel        │      1.TRUNK      │   Refer   │ blocks/rx_channel        │                          │
│ tutorial.rxtx              │      1.TRUNK      │   Refer   │ blocks/rxtx              │                          │
│ tutorial.stup_ana          │  HEAD.TRUNK [@1]  │   Refer   │ blocks/stup_ana          │                          │
│ tutorial.sys_bus           │      1.TRUNK      │   Refer   │ blocks/sys_bus           │                          │
│ tutorial.t0                │      1.TRUNK      │   Refer   │ blocks/t0                │                          │
│ tutorial.t1                │      1.TRUNK      │   Refer   │ blocks/t1                │                          │
│ tutorial.timers            │      1.TRUNK      │   Refer   │ blocks/timers            │                          │
│ tutorial.tool_cert         │ LATEST.TRUNK [@0] │ Container │                          │                          │
│ tutorial.trc               │  HEAD.TRUNK [@1]  │   Refer   │ blocks/trc               │                          │
│ tutorial.tutorial_IEC61508 │      1.TRUNK      │   Refer   │ blocks/tutorial_IEC61508 │                          │
│ tutorial.tutorial_ISO26262 │      1.TRUNK      │   Refer   │ blocks/tutorial_ISO26262 │                          │
│ tutorial.verif_config      │      1.TRUNK      │   Refer   │ blocks/verif_config      │                          │
└────────────────────────────┴───────────────────┴───────────┴──────────────────────────┴──────────────────────────┘

Workspace Contents

Looking at the contents of the workspace, we see that a set of cds.lib files has been generated based on the presence of Cadence data in some of the IPs loaded into the workspace. The cds_run directory contains links to .cdsinit and other files needed to configure Cadence startup. The generation of these and similar files for other applications is configured in the piclient.conf file; see the Workspace Configuration section for more details. The .p4config file contains Perforce configuration for the workspace.

Workspace Contents
> cd tutorial.tutorial
> ls -al
total 32
drwxr-xr-x 5 mdx  mdx  4096 Apr 28 19:12 .
drwxrwxrwx 5 root root 4096 Apr 28 19:14 ..
drwxr-xr-x 2 mdx  mdx  4096 Apr 28 19:11 blocks
-rw-r--r-- 1 mdx  mdx   188 Apr 28 19:12 cds.lib
drwxr-xr-x 2 mdx  mdx  4096 Apr 28 19:12 cds_run
drwxr-xr-x 2 mdx  mdx  4096 Apr 28 19:12 .methodics
-rw-r--r-- 1 mdx  mdx    61 Apr 28 19:11 .p4config
-rw-r--r-- 1 mdx  mdx   803 Apr 28 19:12 workspace_cds.lib
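
The .p4config file is a standard Perforce P4CONFIG file, containing a few variable assignments that point p4 commands at the workspace's client and server. A sketch of its likely contents (the values shown are illustrative, not taken from the example above):

.p4config contents (illustrative)
> cat .p4config
P4PORT=perforce:1666
P4CLIENT=mdx_tutorial.tutorial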

The top-level IPV loaded into the workspace has been configured to load the incoming IPs into the blocks sub-directory. Any number of configurations is possible, based on IP and Library names and using wildcards. The placement of IPs in the workspace is configured with the '--path' project property; see the Workspace Configuration page for more details. The IPs loaded into the workspace have been loaded in 'refer' mode, so they are links to the common IPLM Cache. To make them local, use the 'pi ip local' command (a sketch is shown after the blocks listing below). Once in local mode, the IPs can be modified using standard DM commands, either directly from the command line or via a higher-level application such as VersIC.

Blocks Directory
> ls -al blocks/
total 8
drwxr-xr-x 2 mdx mdx 4096 Apr 28 19:11 .
drwxr-xr-x 5 mdx mdx 4096 Apr 28 19:12 ..
lrwxrwxrwx 1 mdx mdx   44 Apr 28 19:11 acells_tsmc18 -> /picache-root/tutorial/acells_tsmc18/TRUNK/1
lrwxrwxrwx 1 mdx mdx   37 Apr 28 19:11 adc -> /picache-root/tutorial/adc/TRUNK/HEAD
lrwxrwxrwx 1 mdx mdx   37 Apr 28 19:11 aes512 -> /picache-root/tutorial/aes512/TRUNK/1
lrwxrwxrwx 1 mdx mdx   44 Apr 28 19:11 analog_top -> /picache-root/tutorial/analog_top/TRUNK/HEAD
lrwxrwxrwx 1 mdx mdx   40 Apr 28 19:11 bist_sram -> /picache-root/tutorial/bist_sram/TRUNK/1
lrwxrwxrwx 1 mdx mdx   37 Apr 28 19:11 CADenv -> /picache-root/tutorial/CADenv/TRUNK/1
lrwxrwxrwx 1 mdx mdx   40 Apr 28 19:11 clkgen -> /picache-root/tutorial/clkgen/TRUNK/HEAD
lrwxrwxrwx 1 mdx mdx   38 Apr 28 19:11 clk_mux -> /picache-root/tutorial/clk_mux/TRUNK/1
lrwxrwxrwx 1 mdx mdx   32 Apr 28 19:11 cortex -> /picache-root/ARM/cortex/TRUNK/1
lrwxrwxrwx 1 mdx mdx   34 Apr 28 19:11 cpu -> /picache-root/tutorial/cpu/TRUNK/2
lrwxrwxrwx 1 mdx mdx   35 Apr 28 19:11 dbuf -> /picache-root/tutorial/dbuf/TRUNK/1
lrwxrwxrwx 1 mdx mdx   42 Apr 28 19:11 digital_top -> /picache-root/tutorial/digital_top/TRUNK/2
lrwxrwxrwx 1 mdx mdx   40 Apr 28 19:11 events_if -> /picache-root/tutorial/events_if/TRUNK/1
lrwxrwxrwx 1 mdx mdx   36 Apr 28 19:11 flash -> /picache-root/tutorial/flash/TRUNK/1
lrwxrwxrwx 1 mdx mdx   39 Apr 28 19:11 flash_if -> /picache-root/tutorial/flash_if/TRUNK/1
lrwxrwxrwx 1 mdx mdx   38 Apr 28 19:11 gen_dig -> /picache-root/tutorial/gen_dig/TRUNK/1
lrwxrwxrwx 1 mdx mdx   47 Apr 28 19:11 ibm_rqm -> /picache-root/certification/ibm_rqm/RQM_6_0_5/0
lrwxrwxrwx 1 mdx mdx   40 Apr 28 19:11 interface -> /picache-root/tutorial/interface/TRUNK/1
lrwxrwxrwx 1 mdx mdx   42 Apr 28 19:11 intf_ana -> /picache-root/tutorial/intf_ana/TRUNK/HEAD
lrwxrwxrwx 1 mdx mdx   35 Apr 28 19:11 io5v -> /picache-root/tutorial/io5v/TRUNK/1
lrwxrwxrwx 1 mdx mdx   40 Apr 28 19:11 io_tsmc18 -> /picache-root/tutorial/io_tsmc18/TRUNK/1
lrwxrwxrwx 1 mdx mdx   43 Apr 28 19:11 laysc_tsmc18 -> /picache-root/tutorial/laysc_tsmc18/TRUNK/1
lrwxrwxrwx 1 mdx mdx   36 Apr 28 19:11 MS90G -> /picache-root/tutorial/MS90G/TRUNK/1
lrwxrwxrwx 1 mdx mdx   38 Apr 28 19:11 padring -> /picache-root/tutorial/padring/TRUNK/1
lrwxrwxrwx 1 mdx mdx   40 Apr 28 19:11 proj_tech -> /picache-root/tutorial/proj_tech/TRUNK/1
lrwxrwxrwx 1 mdx mdx   46 Apr 28 19:11 pwr_mgmt_ana -> /picache-root/tutorial/pwr_mgmt_ana/TRUNK/HEAD
lrwxrwxrwx 1 mdx mdx   41 Apr 28 19:11 rx_channel -> /picache-root/tutorial/rx_channel/TRUNK/1
lrwxrwxrwx 1 mdx mdx   35 Apr 28 19:11 rxtx -> /picache-root/tutorial/rxtx/TRUNK/1
lrwxrwxrwx 1 mdx mdx   42 Apr 28 19:11 stup_ana -> /picache-root/tutorial/stup_ana/TRUNK/HEAD
lrwxrwxrwx 1 mdx mdx   38 Apr 28 19:11 sys_bus -> /picache-root/tutorial/sys_bus/TRUNK/1
lrwxrwxrwx 1 mdx mdx   33 Apr 28 19:11 t0 -> /picache-root/tutorial/t0/TRUNK/1
lrwxrwxrwx 1 mdx mdx   33 Apr 28 19:11 t1 -> /picache-root/tutorial/t1/TRUNK/1
lrwxrwxrwx 1 mdx mdx   37 Apr 28 19:11 timers -> /picache-root/tutorial/timers/TRUNK/1
lrwxrwxrwx 1 mdx mdx   37 Apr 28 19:11 trc -> /picache-root/tutorial/trc/TRUNK/HEAD
lrwxrwxrwx 1 mdx mdx   39 Apr 28 19:11 tutorial -> /picache-root/tutorial/tutorial/TRUNK/7
lrwxrwxrwx 1 mdx mdx   48 Apr 28 19:11 tutorial_IEC61508 -> /picache-root/tutorial/tutorial_IEC61508/TRUNK/1
lrwxrwxrwx 1 mdx mdx   48 Apr 28 19:11 tutorial_ISO26262 -> /picache-root/tutorial/tutorial_ISO26262/TRUNK/1
lrwxrwxrwx 1 mdx mdx   43 Apr 28 19:11 verif_config -> /picache-root/tutorial/verif_config/TRUNK/1
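
A minimal sketch of switching one IP from refer to local mode with 'pi ip local', run from within the workspace (the IP name is taken from the tutorial hierarchy above; the exact arguments and output may differ in your installation):

Making an IP local
> cd /tmp/workspaces/tutorial.tutorial
> pi ip local tutorial.cpu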