Managing DirX Identity Servers

This chapter explains how to manage:

  • The Message Broker

  • The Java-based Identity Server (IdS-J)

  • The C++-based Identity Server (IdS-C)

It also provides information on:

  • Distributed deployments and scalability

  • High availability and recovery

  • Diagnostics

  • How to manage daylight savings time

  • Connector frameworks

For information on high availability for both Java- and Tcl-based workflows, please see the use case document High Availability.

Managing the Message Broker

DirX Identity uses a JMS Message broker for most of its internal communication:

  • The DirX Identity services (for example, the privilege resolution service), Web Center, Java-based real-time workflows and metacp, to send real-time events for immediate processing by Java-based workflows.

  • The Windows Password Listener and Web applications, to send password change events to be processed by password event managers and password synchronization workflows.

  • The C++-based Identity Servers (IdS-C servers) in a distributed environment, to exchange the command messages that start and control Tcl-based workflows. The same interface is used by the runwf tool.

  • The C++-based DirX Identity Status Tracker component, to receive messages from various components and create status entries in the status area.

The Message Broker is one or more Apache ActiveMQ instances. The ActiveMQ instances must be installed via the DirX Identity installer to manage locations and names, but the target servers for these instances are not predefined, so the messaging system is flexible and can scale easily.

This section describes how to plan for and set up the message broker, including:

  • Deployment options (single broker, high availability)

  • Installation, configuration, start/stop, JMX access and logging

  • Basic concepts of how DirX Identity uses the message broker

  • Messages and message sizes used in DirX Identity

Planning the Message Broker Deployment

You can deploy the Message Broker in different ways, depending on your load-balancing and high-availability requirements. Installation/configuration wizard deployment options include:

  • One Message Broker instance installed on the same system as an IdS-J or IdS-C server

  • One Message Broker instance installed on an external server

  • Multiple Message Broker instances spread over IdS-J / IdS-C and external servers sharing the same database for persistent messages (database on a shared drive)

Apache ActiveMQ offers various options on how to operate multiple instances. The DirX Identity configuration uses the following implementation:

  • Only one Message Broker instance is accessible for clients. This broker has exclusive access to the database (DB lock) for persistent messages.

  • All other instances are up and running, but can’t access the database. If the exclusive broker is unavailable, the next instance takes over the database access, captures the persistent messages and starts up the connectors to be accessible for the clients.

  • The switch from one broker instance to another instance is transparent for the clients. DirX Identity’s Message Broker name and instance management service handles the discovery of the Message Broker instance.

  • To ensure the fail-over capability, the database for persistent messages must be on a shared drive to which all broker instances have access. For a single-broker installation, you can install the database locally.

About the Message Broker Components

Message broker components include:

  • The Message Broker wrapper container

  • The database for persistent messages

  • The start/stop service/daemon

Each Message Broker runs its own wrapper container which is deployable on Windows and/or UNIX systems.

Each Message Broker needs a database for persistent messages. The built-in database is "KahaDB". The database is installed by the DirX Identity installation. The location of the database depends on the message broker deployment in use.

Each Message Broker is represented by a service on Windows or a daemon process on UNIX. Service names are:

DirX Identity Message Broker number (Windows)

ids-mbrk-number (UNIX)

The number is assigned during Message Broker configuration. Only one Message Broker per server is supported. In a high availability scenario, you may have multiple Message Brokers across a distributed DirX Identity installation.

Starting the Message Broker

The services that make up the Message Broker normally start automatically when the system is booted. The service starts independently of the IdS-J / IdS-C services. Message broker services include:

  • Service name on Windows: DirX Identity Message Broker number

  • Process name on Windows: wrapper.exe

  • Process name on UNIX: wrapper

  • Controlled processes on Windows: java.exe

  • Controlled processes on UNIX: java

The shell script install_path/etc/dmmbrk-number starts the service on Linux when the system runs in multi-user mode.

Configuring the Message Broker

The DirX Identity-supported configuration options for the Message Broker are:

  • Target server for a Message Broker instance

  • Location of the database for persistent messages

  • Optional transport options

  • Optional fail-over options in the case of a high-availability deployment. These are configured at the parent entry of the Message Broker configuration entries; an example follows this list. For details, see the ActiveMQ documentation: http://activemq.apache.org/failover-transport-reference.html.
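
For illustration, a failover transport URI that lists two broker instances might look like the following (the host names and ports are placeholders; randomize=false keeps the client trying the brokers in the listed order):

failover:(tcp://broker1.example.com:61616,tcp://broker2.example.com:61616)?randomize=false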

Monitoring the Message Broker

Use the DirX Identity Server Admin to monitor a Message Broker. Server Admin provides an overview of installed DirX Identity servers and components including the Message Broker instance(s). The instance status and a link to the Web console are provided in this view.

ActiveMQ provides a Web Console, which allows you to view the number of messages in the queues and even the messages themselves. The port can be configured as admin port in the associated system service of the broker entry. For more details see the ActiveMQ web page (http://activemq.apache.org/web-console.html).

To call the Web Console, use the following URL:

http://your installation host:8161/admin

or, in case of an SSL configuration:

https://your installation host:8161/admin

Port number 8161 is the default. In the ActiveMQ Message Broker configuration step, you can set a different port number.

Message Broker Logging

When the Message Broker starts for the first time, default log levels are applied. They ensure that error and warning logs of all components are written. The log files are stored in the data folder of the local broker’s home directory (install_path/messagebroker). Log file names start with wrapper (for the service handler) or with activemq (for the broker itself).

Message Broker Instance Naming

Using multiple Message Brokers requires an easily-understood naming scheme for service names, LDAP entry configuration names and file folder names.

Message Broker Object Naming

The Message Broker objects in the Connectivity view group (Connectivity → Messaging Services) follow this naming scheme:

Message Broker number

where number is the number assigned to the Message Broker. The numbers are assigned dynamically; the first Message Broker to be configured is assigned the number 1.

Example for a server object:

Message Broker 1

Service Naming

The services on Windows use the following naming scheme:

DirX Identity Message Broker number

Example for a server:

DirX Identity Message Broker 1

File Folder Naming

The file folder in the installation area is install_path/messagebroker because only one Message Broker per local installation is supported.

JMX Access to the Message Broker

By default, JMX access to the Message Broker needs password authentication. Authentication is performed by passing the user credentials to an LdapLoginModule which tries to bind to the Connectivity LDAP server with these credentials. When the bind is successful, the JMX access is successfully authenticated.

Per configuration, the LDAP user is cn=DomainAdmin,cn=domain,dxmC=Users,dxmC=DirXmetahub.

On the JMX client side, you only need to give “DomainAdmin” as the username and the appropriate password. This is configured in the file jmxldap.cfg in the Message Broker’s conf folder.

If the SSL flag is activated in the system-wide configuration, the JMX access is also secured by SSL. In this case, a non-SSL access is not possible.

If SSL is configured for the Connectivity store, the authentication process to the LDAP server is also performed using an SSL connection. In this case, you need to make sure that the LDAP server’s root CA chain certificates are in the cacerts file of the Identity Java environment.
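
The following minimal sketch shows how a standard JMX client can open such an authenticated connection; the host, port and password are placeholder values:

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxLoginSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder host and port; use the JMX port configured for your installation.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:40005/jmxrmi");
        Map<String, Object> env = new HashMap<>();
        // The username and password are passed to the LdapLoginModule on the server side.
        env.put(JMXConnector.CREDENTIALS, new String[] { "DomainAdmin", "secret" });
        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            System.out.println("Connected, MBean count: " + mbsc.getMBeanCount());
        }
    }
}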

Understanding the Java Messaging Service

DirX Identity uses both messaging paradigms supported by the Java Messaging Service (JMS): point-to-point (P2P) and publish/subscribe (Pub/Sub).

In P2P messaging, a producer sends a message to a queue. One or several consumers can read from this queue, but the broker passes each message to only one of them. In Pub/Sub messaging, a producer sends a message with a topic. One, several or even no consumers can subscribe to that topic (they are then called subscribers). Each subscriber receives its own copy of the message. When no consumer has subscribed to the topic, the Message Broker simply deletes the message.

Both P2P and Pub/Sub allow message producers and consumers to remain independent of one another. No producer needs to know the consumer(s) of its messages, and a consumer does not need to know the producer(s). You can have multiple producers and multiple consumers. Consumers process a message when they have time for it, and the producer does not need to wait until the consumer is finished; that is, messages are processed asynchronously.

Messages can be persistent or transient. When a message is declared persistent, the consumer(s) need not be online when the producer sends it. For Pub/Sub, as soon as the consumer has subscribed to a topic, it receives all messages sent after that moment even if it stops and starts again later. For P2P, persistent messages are stored in the queue until some consumer reads and acknowledges them. Transient messages are lost when the Message Broker stops.
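
The following minimal sketch (plain JMS API with the Apache ActiveMQ client library; the broker URL and the payload are placeholder values, and "My-Company" stands for the domain prefix in the queue name) shows how a producer sends a persistent message to a queue:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentProducerSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL; 61616 is the ActiveMQ default port.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Example queue name; "My-Company" stands for the domain prefix.
        Queue queue = session.createQueue("My-Company.dxm.event.pwd.changed");
        MessageProducer producer = session.createProducer(queue);
        // Persistent delivery: the message survives a broker restart.
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("<spml:modifyRequest .../>"));
        connection.close();
    }
}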

Using Messages in DirX Identity

Most of the DirX Identity messages are persistent, so they are not lost when a Message Broker or a consumer is down. Only a few message types are marked as transient: they mainly distribute configuration update notifications or certificate information. Because the servers read their configuration on start-up and the Windows Password Listener also requests configuration and certificate updates on each start-up, no information is lost.

Some of the DirX Identity components work in P2P, others work in Pub/Sub mode.

All the messages around real-time workflows, especially password changes, are P2P. Only one consumer is intended to process them.

Pub/Sub messages are used for:

  • Controlling and monitoring Tcl-based workflows by the C-based Identity Server. They are produced by the components of the C-based Server itself (scheduler, workflow engine, agent controller) and the Identity Manager. The meta controller metacp can also produce Pub/Sub messages, but they are no longer used for the standard workflows.

  • Issuing update notifications, mainly of configuration and of certificates. The main consumer is the Windows Password Listener in order to get up-to-date information on available Message Brokers and on current certificates.

Queues of the Java-based Server

The message queues described here are consumed by the Java-based Server adaptors to process real-time events. Queue names are case-sensitive; they are handled in lowercase internally.

Note that the domain prefix is optional. It depends on the flag Include domain into topic of the General tab of the domain object.

domain.dxm.event.pwd.changed:

A message in this queue indicates the password change for a user.

domain.dxm.event.svctsaccount.pwd:

A message in this queue indicates a password change for an account or that the password of an account has expired.

domain.dxm.setpasswordrequest:

A message in this queue starts a password provisioning workflow to set the password of an account in a connected system.

domain.dxm.setpasswordrequest._default:

This queue contains those messages sent to domain.dxm.setpasswordrequest that can be processed in any Java-based Server.

domain.dxm.request.provisiontots:

A message in this queue triggers the synchronization of account, group and membership changes from target systems to connected systems.

domain.dxm.request.provisiontots._default:

This queue contains those messages sent to domain.dxm.request.provisiontots that can be processed in any Java-based Server.

domain.dxm.event.ebr:

A message in this queue indicates the change of a user, business object, account or other domain entry and is processed by an Event-based Maintenance workflow.

domain.dxm.request.workflow.ebr:

A message in this queue starts a maintenance workflow by its DN; for example, the certification campaign controller.

domain.dxm.request.importtoidentity:

A message in this queue indicates a change in an entry in a remote system (for example, a user in an Active Directory) that needs to be imported into the DirX Identity domain.

domain.dxm.request.importtoidentity._default:

This queue contains those messages sent to domain.dxm.request.importtoidentity that can be processed in any Java-based Server.

domain.dxm.request.workflow.provisioning:

A message in this queue is used to start a real-time workflow by name (used by Identity Manager and Scheduler).

domain.dxm.request.workflow.provisioning._default:

This queue contains those messages sent to domain.dxm.request.workflow.provisioning that can be processed in any Java-based Server.

domain.dxm.request.workflowengine:

A message in this queue requests the request workflow engine to update a workflow.

domain.dxm.request.activitytask:

A message in this queue is used to start an activity in a request workflow.

domain.dxm.notify.mail:

A message in this queue is used to send an email.

domain.dxm.notify.sms:

A message in this queue is used to send an SMS.

domain.dxm.request.user.resolve:

A message in this queue identifies a user whose privileges are to be resolved by the Resolution Adapter.

The following queues are created on demand when the Provisioning workflows for a connected system should run only on dedicated servers:

domain.dxm.request.provisiontots.target system identifier:

This queue contains those messages sent to domain.dxm.request.provisiontots that refer to entries of the selected target system.

domain.dxm.request.workflow.provisioning.target system identifier:

This queue contains those messages sent to domain.dxm.request.workflow.provisioning that refer to entries of the selected target system.

domain.dxm.setpasswordrequest.target system identifier:

This queue contains those messages sent to domain.dxm.setpasswordrequest that refer to entries of the selected target system.

domain.dxm.request.importtoidentity.target system identifier:

This queue contains those messages sent to domain.dxm.request.importtoidentity that refer to entries of the selected connected system.

For information on the target system identifier and when these queues are created, see the section "Distributed Deployments and Scalability" → "Separating Traffic for Selected Connected Systems".

Topics of the Java-based Server

Messages to the following topics are consumed by one or more Java-based Servers. Each Java-based Server automatically subscribes to the following one:

dxm.event.configuration.changed:

The DirX Identity Manager publishes a message to this topic when you request to "Load IdS-J Configuration". It triggers a reload of all workflow and schedule definitions. If the domain is not specified in the message body, all servers load their workflow and schedule definitions.

For the following topic, exactly one Java-based Server per domain is intended to subscribe. Select the server by moving the corresponding adaptor to the desired server. You can do this in Identity Manager by right-clicking a Java-based Server and selecting "Manage IdS-J Configuration", or you can use Server Admin.

domain.dxm.request.configuration:

Windows Password Listener (WPL) publishes a message with this topic in order to obtain information about all available messaging services and to obtain a new certificate. The list of available messaging services allows the WPL to switch to another messaging service in case of failure.

If older Windows Password Listeners (versions before V8.3) are still running in the customer environment, the following topic is relevant. Exactly one Java-based Server across all domains is intended to subscribe to it:

dxm.event.certificate:

Events of this type allow the older Windows Password Listeners to obtain new certificates. On start-up, the WPL publishes a message with the topic dxm.event.certificate.request to request the current certificate. The Java-based Server returns the requested certificate in a message with the topic dxm.event.certificate.changed. WPL subscribes to that topic and thus receives the certificate. If the certificate is changed, the Java-based Server sends the same message to the topic dxm.event.certificate.changed so that all Password Listeners are informed immediately.

Topics of the C++-based Server

The following message topics are used by the C++-based Server for internal and external communication:

dxm.command.machine_name

This event type represents commands for the workflow engine (create or start workflow instance; create or start activity instance).

dxm.fileservice.machine_name

Events of this type allow transferring files via the JMS messaging service.

dxm.statustracker

Various components of the C++-based Server create status information with this event type. The status tracker processes this information and creates the corresponding status entries in the Monitor View.

Topics of the Password Listener

The following message topic is used by the Windows Password Listener communication:

domain.dxm.response.configuration:

Windows Password Listener (WPL) receives messages with this topic in order to obtain information about all available messaging services and to obtain a new certificate.

About Message Sizes

The size of the sent messages varies with the kind of message.

Password Messages

Password messages are about 1250 bytes each, mainly because the encrypted password is converted to base64 format.

Real-time Events

The real-time event messages are about 700 bytes each. Here is an example (without the message frame):

<spml:modifyRequest xmlns:dsml="urn:oasis:names:tc:DSML:2:0:core"
xmlns:spml="urn:oasis:names:tc:SPML:1:0"
xmlns:event="urn:siemens:dxm:EVENT:1:0"
xmlns:order="urn:siemens:dxm:ORDER:1:0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
requestID="svc.modify.dxm.request.provisionToTS.LDAP.cluster='localhost'.
resource='cn=My-Company'.uid-8b19a143--29c9507c-1132f75a2b8--7fff">
<spml:identifier type="urn:oasis:names:tc:SPML:1:0#DN">
<spml:id>cn=Alexander Gerber 5217,cn=Accounts,cn=Extranet
  Portal,cn=TargetSystems,cn=My-Company</spml:id>
</spml:identifier>
<spml:modifications/>
</spml:modifyRequest>

The messages contain only the identifier of the object. The workflow reads the rest of the information from the LDAP directory.

Command and Status Messages

For a Tcl-based workflow with two activities started from the DirX Identity Manager, the size of the messages in bytes is approximately:

Message Type                                              Number of Bytes
create                                                    640
create acknowledge                                        540
execute                                                   650
execute acknowledge                                       540
destroy                                                   370
Sum                                                       2740
+ 8 status messages (800 through 1050 bytes per message)  8400
Total                                                     11140

If the workflow is started from the scheduler in a non-distributed environment, only status messages are sent via the messaging service. All other messages (command messages) are sent via an internal queue mechanism. In this case, the total number of bytes for a two-step batch workflow is about 7400 bytes.

When using compression mode, the total number of bytes (and thus the system load) can be significantly reduced:

Compression Mode    Number of Messages    via Manager    via Scheduler
None                8                     10150          7400
Compressed          5                     7375           4625
Minimized if OK     1                     3675           925
Suppressed if OK    0                     2750           0

"via Scheduler" means that the batch workflow runs in a non-distributed environment.

File Transfer Messages

The overhead for a file transfer message is 660 bytes plus the filename length (if the file name is 20 bytes, the overhead is 680 bytes).

The messaging service object has a message length parameter, which is 1 MB by default. You can set this value from 32 K to 4 MB (values outside these boundaries are automatically set to the nearest boundary).

If a file is 1.5 MB and the file name is 20 bytes, the message is divided into a first block with 1 MB - 680 bytes of file data and a second one with the remaining 1.5 MB - (1 MB - 680 bytes). Including overhead, the first message is exactly 1 MB and the second one is 0.5 MB + 1360 bytes.

So the general formula is:

limit = (configured value)
overhead = 660 + nameLength
blocksize = limit - overhead

The last block can be smaller.
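
The following minimal sketch expresses this calculation in Java, using the example values from above (1 MB limit, 20-byte file name, 1.5 MB file):

public class FileTransferBlockSketch {
    public static void main(String[] args) {
        long limit = 1024 * 1024;            // configured message length limit: 1 MB
        int nameLength = 20;                 // file name length in bytes
        long overhead = 660 + nameLength;    // per-message overhead: 680 bytes
        long blockSize = limit - overhead;   // file data carried per message
        long fileSize = 3 * 1024 * 1024 / 2; // 1.5 MB file
        long fullBlocks = fileSize / blockSize;
        long lastBlock = fileSize % blockSize; // the last block can be smaller
        System.out.println(fullBlocks + " full block(s) of " + blockSize + " bytes file data");
        System.out.println("last block: " + lastBlock + " bytes file data");
    }
}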

Messaging Subscriptions

JMS Topic subscribers must use a unique name. This enables the Message Broker to distinguish different subscribers when they re-connect. DirX Identity components use the following naming schemes.

Java-based Server
  • adaptorname

where

adaptorname

is the name of the adaptor that uses this subscription.

Examples:

AdminRequestHandler
ConfigurationHandler
CertificateHandler
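
The following minimal sketch (again plain JMS with the ActiveMQ client library; the broker URL is a placeholder, and the topic and subscription names are taken from the examples in this section) shows how such a unique name is used when creating a durable topic subscription:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        // The client ID together with the subscription name identifies
        // the durable subscriber across re-connects.
        connection.setClientID("ConfigurationHandler");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("dxm.event.configuration.changed");
        TopicSubscriber subscriber =
            session.createDurableSubscriber(topic, "ConfigurationHandler");
        Message message = subscriber.receive(5000); // wait up to 5 seconds
        System.out.println("received: " + message);
        connection.close();
    }
}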

C++-based Server
  • hostname.jms.dxmmsssvr_dxm.component.servername

where

hostname

is the hostname where the server is running.

component

is the component that uses this subscription:

command

used to start Tcl-based workflows.

statustracker

the status tracker receives status messages that are written to the LDAP directory.

fileservice

used by the file service to exchange data files between machines.

servername

the name that is defined in the dxmmsssvr.ini file (attribute dnServerName).

For the Status Tracker, the subscription name is just jms.dxmmsssvr_dxm.component.

Example:

myhost.jms.dxmmsssvr_dxm.command.myhost

Meta Controller

The meta controller allows sending JMS messages with this naming scheme:

  • hostname.jms.metacp.hostname_dxm.component.identifier

where

identifier

is a unique identifier.

For a description of the other fields, see the description of the C++-based Server.

Example:

myhost.jms.metacp.myhost_dxm.command.c0a860862gui

Manager

For the Process Table feature, the Manager creates only non-durable subscriptions, which use this naming scheme:

  • gui.identifier1.username_identifier2

where

identifier1

is a unique identifier of the manager instance.

username

is the username with which the manager was started.

identifier2

is a unique identifier for the subscriber.

Example:

gui.c0a8e680.metatest_a81b2bc71aa9cac5_-1a203253_12ec46696c8_-7fe0

Managing the Java-based Server

This section describes the Java-based Identity Server (IdS-J), including information about:

  • Java-based server components

  • Server processes and how to start and configure them

  • Recovery

  • Auditing

  • Statistics

  • Logging

  • Naming schemes

  • Resource families

  • JMX access

Server Components

The Java-based Identity Server (IdS-J) comprises a complete infrastructure to run event-driven and scheduled synchronization and request workflows. The following figure shows the Java-based Server components.

Figure 1. DirX Identity Java-based Identity Server Components

The next sections provide an overview of the Administration, Adaptors, Workflow Engine, Connector and Handler components.

Administration Components

You can use Web applications or JMX clients via JMX Beans to administer the Java-based Server:

  • Web Admin - a specialized administrative Web application for monitoring and controlling the Java-based Server in which it is embedded.

  • Server Admin - a specialized administrative Web application for monitoring and controlling several Java- and C++-based Servers. This application also runs in the embedded Tomcat of each Java-based Server and is only enabled if High Availability is activated.

  • Supervisor - a servlet running in the embedded Tomcat Web Container that can be started if high availability is activated. It is responsible for monitoring another IdS-J and optionally all C-based Servers. If one of these servers crashes, the supervisor moves its functionality - JMS adaptors, Request Workflow Support, Java Scheduler and Status Tracker - to its local server or another C-based Server (status tracker). It also requests the backup adaptor to recover the backed-up messages.

  • JMX-based administration - any custom JMX client can be used as an alternative to the Web applications for administering or controlling the server; for example, Oracle’s JConsole.

The Configuration Manager is an internal component of each Java-based Server that is responsible for loading all necessary configuration information during server startup. It can also perform an update after an explicit request from administrative interfaces.

Adaptors

JMS adaptors in a Java-based Server read specific events from JMS message queues or subscribe to message topics. A JMS adaptor is available for each queue and for each topic. In general, adaptors consuming messages from JMS queues can be started in each Java-based Server and thus support load balancing and scalability. Some of the JMS adaptors subscribing to message topics are automatically activated in all Java-based Servers; some others should only be activated in one server. For more details, see "Managing the Message Broker".

Target system-specific adaptors process Provisioning workflows that are sent to queues specific to a target system or connected system (for example, for importing entries to the Identity Store) so that slow target systems or those with a lot of traffic do not slow down the provisioning of other systems. Dispatchers for the corresponding queues distribute the messages either to the default queue or to the appropriate target system-specific queue. For more details, see “Queues of the Java-based Server” in "Managing the Message Broker".

For supporting high availability, a backup adaptor can be started. It is responsible for backing up all messages of all JMS adaptors (except target system-specific adaptors) of its monitored server. In case of fail-over, it can be instructed to send the stored messages to its Message Broker.

The Dead Letter adaptor receives messages that couldn’t be processed successfully in the local server and stores them to a local embedded database. Web Admin allows an administrator to either re-process all or a subset of these messages or to delete them.

For supporting request (approval) workflows, the Request Workflow Web Services must be activated in one Java-based Server. They are hosted in the embedded Tomcat Web Container.

The Resolution Adapter is responsible for resolving a user’s privileges to access rights in connected systems. Whenever a client application adds or removes a privilege assignment or changes an attribute that might affect the user’s access rights, it sends a message, and the Resolution Adapter calculates the groups and accounts of that user. The Resolution Adapter is started on every Java-based Server. The number of listeners per server for the “resolution queue” is configured in the central configuration entry of the domain; the default is 2.

Request Workflows

The Request Workflow Web Service is deployed with every Java-based Server and supports request (approval) workflows. It allows for creating a new request workflow, updating its state, especially performing approval, and suspending and resuming a workflow. It is hosted in the embedded Tomcat Web Container.

A special job called Request Workflow Timeout Check (previously named Full Check) regularly checks timeouts of request workflows and their activities. If it detects a timeout, it sends a request so that the workflow engine updates the workflow state and, for example, terminates the activity or workflow. This timeout check must run on exactly one server per domain. It is configured in the Connectivity View group of Identity Manager by navigating to a Java-based Server and then selecting Manage IdS-J Configuration from the context menu.

Workflow Engine and Connectors

The workflow engine controls real-time provisioning and maintenance workflows as well as request workflows for approval. Workflows consist of activities; the workflow engine starts these activities and controls their maximum lifetime, handling timeouts. It is responsible for retrying activities after temporary errors, for escalation handling and for properly setting the operational status of both activities and workflows.

Workflow activities of provisioning and maintenance workflows are realized by components built on the connector framework. In addition to the connectors that implement the interfaces to external systems, the important components of the framework are:

  • The scheduler, which runs a domain's Java-based workflows according to defined schedules. In this case, the Java-based workflows are not run as real-time workflows triggered by events; they retrieve the necessary data from search definitions. Note that exactly one scheduler per Identity domain must be active. Configure it in the Connectivity View group of Identity Manager by navigating to a Java-based Server and then selecting Manage IdS-J Configuration from the context menu.

  • The controller (called the join engine in Provisioning workflows), which is the central component that controls the behavior of a job: it reads configuration, initiates the components and calls the connectors.

  • The DirX Identity connectors, which handle search and update operations with the external connected systems or internal event channels to other activities, audit or other adaptors. These ready-to-use connectors map internal SPML requests and responses to connected system API calls.

Activities of request workflows can be realized based on the connector framework, but are most often independent of it and can even be completely proprietary. They can be either automatic or people activities. The workflow engine is responsible for setting the appropriate states for a people activity and for starting notification jobs when e-mail is to be sent at the start or end of an activity.

Handlers

Java-based Server handlers provide common functionality to all Java-based Server components. Handlers are available for:

  • Logging, to capture log entries and write them to configurable log files. Several log handlers can be set up with individual log levels and output destinations.

  • Auditing, to receive audit entries via the audit channel and write them to a destination. The default audit handler writes to files. A JMS audit handler sends audit messages to the DirX Audit Message Broker. See "Auditing" below on how to configure them and install the JMS audit handler when necessary.

  • Statistics, to store the workflow statistics in the Connectivity Configuration (database).

Server Processes

With the DirX Identity Configuration (Wizard), you can set up one or more Java-based Identity Servers (IdS-J) per system. Each server runs as a system process starting threads as needed; for example, to process event-based workflows.

You can run one or more Java-based Servers on the same host for the same Identity domain or for different ones.

Starting the Processes

On Windows, the Java-based Server processes normally start automatically when you boot your system. They start independently of each other.

The process name for IdS-J on Windows is ids-j.exe.

On Linux, the servers start automatically if you have followed the instructions in the DirX Identity Installation Guide. In both cases, the process name is java.

On start-up, the Java-based Server attempts several times to connect to the directory server that holds the Connectivity configuration. If unsuccessful, it does not proceed; you must re-start the IdS-J Server after the directory server is accessible.

Configuring the Processes

The Java-based Server is controlled by the following initialization (*.ini) and password files:

install_path/ids-j-domain-Sn/bin/idmsvc.ini (initialization file for Windows)
install_path/ids-j-domain-Sn/bin/runServer.sh (initialization file and start script on Linux)
install_path/ids-j-domain-Sn/private/password.properties (password file)

Java-based Server INI File Parameters

The Java-based Server (IdS-J) initialization file install_path/ids-j-domain-Sn/bin/idmsvc.ini contains two sections: Section [Settings] and Section [vmargs]. This topic describes the parameters contained in these sections.

Section [Settings]

This section specifies the following general parameters for the Java-based Server:

  • service - the name of the service.

  • displayname - the display name of the service.

  • description - the description of the service.

  • vm - path to the Java virtual machine.

  • mainclass - the main class to start with.

  • workingdir - the working directory (default: .)

  • autostart - whether or not the service starts automatically (default: TRUE):
    FALSE - manual startup
    TRUE - automatic startup

  • timeout - the timeout value in seconds after which the Service Control Manager assumes a serious error and stops the corresponding service (default 240; relevant for Windows only).

  • dxi.java.home.bin - the path to the bin folder of the JRE (Windows only).

Section [vmargs]

This section specifies the Java virtual machine arguments for the Java-based Server. Only the parameters of interest to the administrator are noted here; changes here can harm the Java-based Server. (Note that not all arguments are described here, which means that the numbers given on the left side of the list below may vary in your installation):

  • 0=-Xmx2G - the amount of memory used.
    Change this parameter if you need more heap space.

  • 4=-XX:+HeapDumpOnOutOfMemoryError - dumps a heap file on OutOfMemory. Delete this line if you do not want heap files to be written.

  • 16=-Dcom.sun.management.jmxremote.port=40005
    17=-Dcom.sun.management.jmxremote.rmi.port=40006 - the JMX ports. Note that JMX uses two ports. In the configuration wizard, you define only the first port number; the second is the first number plus one.
    For JMX access, several other defines are set. Of interest is probably just the definition of supported TLS protocols (see the line with -Dcom.sun.management.jmxremote.ssl.enabled.protocols).

  • 25=-Djava.security.auth.login.config=path_to_JavaServer/bin/jmxldap.cfg - LDAP authentication is enabled by default for JMX access.

  • 40=-DIDM_LOGFOLDER=path-to-log-folder - the path to the folder for all the log files of the IdS-J (the default is ../logs). Note that the folder must exist.

If SSL is globally configured, then keystore and keystore password parameters are activated.

The INI file is used on Windows only.

Java-based Server Startup Script on Linux

On Linux, the Java-based Server is started via the runServer.sh script. This file is also the configuration file for the process. It contains the same parameters as the [vmargs] section (see the section "Java-based Server INI File Parameters" for details).

Java-based Server Password File Parameters

The Java-based Server (IdS-J) reads its passwords from the files:

install_path/ssl/password.properties

install_path/ids-j-domain-Sn/private/password.properties

These files contain all passwords and PINs necessary for correct operation. The first file contains the password and PINs necessary for the entire DirX Identity installation, while the second file contains the domain-specific passwords and PINs.

During startup, all DirX Identity servers require reading the relevant configuration information from the Identity Store. For authentication, passwords and PINs must be present in the server configuration files. The servers can read passwords or PINs in clear text or in encrypted format.

If you enter a password or PIN in clear text, the server reads it during the next startup, encrypts it and writes it to the configuration file. From now on, the password and PIN information is no longer readable. If you are in doubt that the right password or PIN is set or if you need to set a new password or PIN, simply replace the encrypted value with the clear text value. During the next server startup, the password or PIN value is encrypted again.

In the install_path/ssl/password.properties file, the following parameters are available (a sample file follows this list):

  • pin - the PIN for the current private key for decryption of attributes (the default is 1234). It is required if encryption mode is enabled.

  • previousPin (optional) - the PIN for the previous private key for decryption of attributes. This allows smooth transition during key exchange / upgrade. The server is able to handle both old encrypted values (encrypted with the previous key) and new encrypted values (encrypted with the current key).

  • keystore (optional) - the password for the SSL key store.

  • truststore (optional) - the password for the SSL trust store.
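
Before the first server start, a minimal install_path/ssl/password.properties might look like the following sketch (clear-text example values; the keystore and truststore passwords are placeholders, and the server encrypts all values at the next startup):

pin=1234
#previousPin=
keystore=changeit
truststore=changeit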

In the install_path/ids-j-domain-Sn/private/password.properties file, the following parameters are available:

  • domain - the password for the user account which is used to access the configuration information in the LDAP directory. The default password is dirx.

    The default user entry for the Connectivity domain is cn=DomainAdmin,cn=domain,dxmC=Users,dxmC=DirXmetahub. You can change it in the file:

    install_path/ids-j-domain-Sn/bin/bindcredentials.xml

    The default user entry for the Provisioning domain is cn=DomainAdmin,cn=domain. You can change it in the file:

    install_path/ids-j-domain-Sn/bindprofiles/private/domain.xml

  • signaturePin (optional) - the PIN that is necessary for system client signature. The related certificate must be present in the Connectivity configuration at the DomainAdmin account under the Users tree (only visible in the Data View).

To leave a password empty, comment out the line with the hash tag (#) character. For example:
#previousPin=

Starting the Java-based Server in Suspended Mode

You can start the Java-based Server in suspended mode. This mode avoids immediate action (for example, workflow starts) directly after startup. Note: Using this mode is only possible if you start the server from a command line. You cannot use this mode if you start the server as a service.

After startup of the server, you can control the components via the Web Admin interface.

You can define the startup parameters either directly in the command line of the runServer.bat (or .sh) file, or you can define them in an extra file. In this case, you must reference this file in the relevant command line of the runServer.bat (or .sh) file:

"%java_exe%" ... -cfg config.cfg

Then you must provide the parameters in this file:

install_path/ids-j-domain-Sn/bin/config.cfg

These options are available:

  • server.suspend=true
    Starts the server but keeps it in suspended mode. Use this mode if you need only monitoring access to the server.

  • extension.load=-all
    Prohibits loading configuration extensions; for example, to handle request or approval workflows.

    You can define specific extensions not to be loaded. These extensions are available:

    com.siemens.idm.requestworkflow
    com.siemens.idm.realtimeworkflow
    com.siemens.idm.domcfg
    com.siemens.idm.backup

    If you define extension.load=-com.siemens.idm.realtimeworkflow, the real-time workflow engine is not loaded.

  • adaptor.load=option
    Lets you define which adaptors will be active after startup.
    These options are available:

    -all - disables all adaptors
    +all - enables all adaptors
    -adaptor - disables the adaptor specified in adaptor
    +adaptor - enables the adaptor specified in adaptor

    Example:
    adaptor.load=-all +DeadLetterQueue +EntryChangeListener +MailListener

    Disables all adaptors and enables only the internal event listeners (no external events are processed).

  • adaptor.suspend=option
    Allows suspending all or parts of the available adaptors after startup.
    These options are available:

    +all - suspends all adaptors
    -all - enables all adaptors
    -adaptor - enables the adaptor specified in adaptor
    +adaptor - suspends the adaptor specified in adaptor

    Example:
    adaptor.suspend=-all +DeadLetterQueue

    Enables all adaptors except the dead letter queue, which remains suspended.
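
Putting these options together, a config.cfg that starts the server suspended, skips loading the real-time workflow engine and enables only the internal event listeners might look like the following sketch (the adaptor names are taken from the examples above):

server.suspend=true
extension.load=-com.siemens.idm.realtimeworkflow
adaptor.load=-all +DeadLetterQueue +EntryChangeListener +MailListener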

Do not forget to reset the parameters in the runServer file; otherwise, the server will always start in this mode.

Recovery

Standard operating system features are used to implement recovery (watchdog functionality) on the supported platforms.

  • On Windows, the standard recovery features for services are used. See the Recovery tab of the corresponding service.

  • Due to missing standard features, there is no watchdog mechanism available on Linux.

Auditing

The Java-based Server provides a consistent mechanism to handle audit logs produced by workflow activities. Audit is a type of long-lived history data. It allows an auditor to review business events. Audit log entries are self-explanatory. They carry all data that is necessary to understand the event and the result.

Samples of auditable events are:

  • A password has been modified successfully or the modification failed.

  • An approval workflow has been started.

  • Someone has approved a user-privilege assignment.

Read more about the format of the produced audit messages in the section "How Audit Trail Works" in the DirX Identity Provisioning Administration Guide.

The Java-based Server supports two audit handlers. Only one of them should be active at a time:

  • The file-based audit handler.

  • The JMS-based audit handler. If this one must be used, it first requires some manual installation steps. See the DirX Identity Installation Guide for details.

For configuring these audit handlers, see the context-sensitive help of DirX Identity Manager.

Statistics

The Java-based Server provides the following kind of statistical data:

  • Tables of counters held in server components or produced by workflows. They can be viewed by the Web administration of the server (Web Admin).

  • The statistics of each workflow stored in the LDAP configuration database. You can view this information with the DirX Identity Manager’s Monitor View.

Read more about the statistics feature of the Web Admin interface in the section "Using Web Admin" in the DirX Identity User Interfaces Guide.

DirX Identity collects another set of statistics data and stores it in the LDAP Connectivity configuration. In DirX Identity’s Connectivity view, select the Monitor View and open the Event based folder. You will find an entry for each workflow run. It contains the start and end time of the workflow, a table of statistics counters and a details list in the remark field. The statistics counters comprise the operations add, modify, delete and search and their results. The details list shows a brief summary for each request, containing the user whose password required changing and the result. This information is deduced from the audit log of each workflow and is only visible if auditing is enabled for the specific workflow.

Logging

When the Java-based Server starts for the first time, default log levels are applied. They ensure that error and warning logs of all components are written.

The log files are stored in the logs folder of the server’s home directory. You can configure how many records are written to one log file. Log file names start with server and contain a timestamp.

You can view the log files from the file system using a standard editor such as Notepad or you can use the Web Admin tool. From the main menu, select View log files in the Logging section. The system presents the current log files.

To set and change log levels, use the Web Admin tool.

Log levels are specified for Java classes or packages. The system supports you by presenting a list of logical components (server, several adaptors, connectors, and so on) from which to choose. You can add class or package names yourself on additional lines to specify your individual range of components to log.

Naming Schemes

Using multiple Java-based Servers for multiple domains requires an easy-to-understand naming scheme for service names, LDAP entry configuration names and file folder names.

Java-based Server Object Naming

The Java-based Server objects in the Connectivity view group (Connectivity → DirX Identity Servers → Java-based Servers) follow this naming scheme:

  • domain-Sn-hostname

where

domain

is the domain for which this server is running.

Sn

is the server number (counts per domain).

hostname

is the host name where this server is running.

Example for a server object:

My-Company-S1-myhost

Service Naming

The services on Windows use this naming scheme:

  • DirX Identity IdS-J-domain-Sn version

where

domain

is the domain this server is running for.

Sn

is the server number (counts per domain).

version

is the version.

Example for a server:

DirX Identity IdS-J-My-Company-S1 V8.3

File Folder Naming

The file folders in the installation area use this naming scheme:

  • ids-j-domain-Sn

where

domain

is the domain for which this server is running.

Sn

is the server number (counts per domain).

Example:

ids-j-My-Company-S1

Note that there is a folder ids-j.org, which is a template used for creating new server connector frameworks. Do not change this folder or its content!

Resource Families

Use resource families to control the number of threads within a Java-based Server.

Each activity of a real-time or a request workflow is associated with a resource family: it requires that resource family. Java-based Servers provide resource families. An activity can only be processed on servers that host the required resource family.

Therefore, make sure you assign each relevant resource family to all of your Java-based Servers.

For each Java-based Server, you must configure the number of threads per resource family. This allows you to influence to some extent the load distribution of certain workflow types between Java-based Servers: the slower a Java-based Server processes messages, the fewer messages it will receive.

Use DirX Identity Manager to assign resource families and threads to a Java-based Server: select the server configuration entry in the Connectivity view (Configuration → DirX Identity Servers → Java Servers - domain), click the Resource Families tab, select the active resource families and then set the number of threads (two by default). Restart the server to make the changes effective. You can use Web Admin to check the configured number of threads.

Understanding the Pre-Configured Resource Families

DirX Identity comes with a set of pre-configured resource families. You can use these resource families, add additional ones or exchange them completely with your set of resource families. To keep it simple, we recommend using the default resource families and then extending them as needed.

System-specific resource families (fixed values, not customizable) include:

scheduler - the internal scheduler of the server, which handles timeout situations and triggers retry of activities.

workflowengine - this resource family is reserved for the workflow engine itself, which starts and controls workflow activities.

workflowscheduler - the scheduler for real-time workflow schedules.

For each target system type, there is one default resource family; for example, ADS, LDAP, JDBC, Notes, SPMLv1, and so on.

Some others are for request and maintenance workflows:

Apply - default thread for request workflow activities to run processes that instantiate, modify and delete object changes.

Calculate - default thread for request workflow activities to run processes that calculate something (for example a GUID).

Event_Maintenance - default thread to handle event-based processing activities.

Mail - default thread to handle activities that send mail requests (for example all error or notification activities).

Request_Workflow - default thread to handle Java-based join activities of provisioning workflows that provide manual provisioning via request workflows.

JMX Access to the Java-based Server

By default, JMX access to the Java-based Server needs password authentication. Authentication is performed by passing the user credentials to an LdapLoginModule, which tries to bind with these credentials to the Connectivity LDAP server. When the bind is successful, the JMX access is successfully authenticated.

Per configuration, the LDAP user is cn=DomainAdmin,cn=domain,dxmC=Users,dxmC=DirXmetahub.

On the JMX client side, you only need to give “DomainAdmin” as the username and the appropriate password. This is configured in the file jmxldap.cfg in the bin folder of the Java-based Server.

If the SSL flag is activated in the system-wide configuration, JMX access is also secured by SSL. In this case, non-SSL access is not possible.

If SSL is configured for the Connectivity store, the authentication process to the LDAP server is also performed using an SSL connection. In this case, you need to make sure that the LDAP server’s root CA chain certificates are in the cacerts file of the Identity Java environment.

Managing the C++-based Server

This section describes how to manage the C++-based Identity Server (IdS-C), including how to:

  • Install, start, and configure C++-based server components

Server Components

The C++-based Identity Server (IdS-C) consists of a set of services that provide the required functionality. The main components start Tcl-based workflows according to schedules and control their activities. A status tracker runs on exactly one IdS-C Server: the one for which the dxmRunStatusTracker attribute is true. This status tracker is responsible for updating the status of Tcl-based workflows in the Monitor area of the Connectivity database.

Each server is represented by a service on Windows or a daemon process on UNIX.

On Windows:

  • DirX Identity IdS-C version

On UNIX:

  • DirX Identity IdS-C version

DirX Identity IdS-C version must run on each machine on which you have performed a C++-based Server installation.

All DirX Identity components communicate with each other using TCP/IP based protocols:

  • Data is read from and written to the configuration database in the LDAP directory (by default, port 389)

  • Messages are transferred between components via the Messaging service.

Starting Up the Server Components

The services that make up the C++-based Server normally start automatically when you boot your system:

The DirX Identity IdS-C service starts independently from the DirX Server service.

  • Service name on Windows: DirX Identity IdS-C version

  • Process name on Windows: dxmsvr.exe

  • Process name on UNIX: dxmsvr

  • Controlled processes on Windows: dxmmsssvr.exe

  • Controlled processes on UNIX: dxmmsssvr

The shell script install_path/etc/S99dmsvr starts the DirX Identity IdS-C service on Linux when the system runs in multi-user mode.

The service uses the following configurable polling mechanism during start-up:

  • Try to connect to the LDAP server. If this fails, retry the number of times and at the interval specified in the initialization file of the C++-based Server.

  • If there is no access to the LDAP server, the service start is aborted.

  • When the bind to the LDAP server is successful, the necessary information is read from the Connectivity configuration database (object Messaging Service in the Expert View: Configuration → Messaging Services).

If the DirX Identity IdS-C Service detects that a C-based Server crashed, the service restarts the server. Permanent threads (Status Tracker and Scheduler) running in the C-based Server are also checked regularly by the Keep Alive mechanism and restarted automatically if they are not responding. You can check the threads with the Get Server State command for each individual server.

You can use the following parameters in the [settings] section of the C-based Server initialization file (dxmmsssvr.ini) to define the polling parameters for each machine that runs a C-based Server:

  • timeout - the time between two polling cycles (the default is 45 seconds).

  • repeat - the number of retries to perform (the default is 10 times).

By default, DirX Identity tries to connect to the LDAP server for 450 seconds (10 retries at 45-second intervals).
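
For example, the corresponding fragment of the [settings] section in dxmmsssvr.ini with the defaults made explicit (the other [settings] parameters are described in "Configuring the C++-based Server" below):

[settings]
timeout=45
repeat=10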

Configuring the C++-based Server

The C++-based Server is controlled by the initialization file dxmmsssvr.ini, which is located in the install_path/server/conf subdirectory.

The parameters in the [settings] section are necessary for the registration of the server in the connectivity configuration database and for consistency check during startup:

  • dnServerName: the name of the C++-based Server’s configuration object in the connectivity configuration in the folder DirXmetahub Servers (default name: main)

  • host: the name of the server on which this C++-based Server instance runs (for example: abc123.myCompany.de or 123.54.76.11).

These two parameters are used to verify that the system is consistent (the right server registers to the correct server LDAP entry). If you change these parameters by hand, you must also change the corresponding parameters in the configuration database. The check is defined as:

The field dnServerName in dxmmsssvr.ini file is equal to the attribute dxmDisplayName of the C++-based Server object (the displayed name) and
the field host in dxmmsssvr.ini file is equal to the attribute dxmServerName of the service object to which the C++-based Server object points.

For the host field check, examine the link to the Service object of the relevant C++-based Server object. Note that the Server Name field in the Service object is relevant for this check, not the IP Address field.

If this check fails, the next start of the C++-based Server will fail! Correct either the LDAP object or the dxmmsssvr.ini file.

  • cconnserver - whether the connector server is started (default is 0):
    0 - Connector server will not be started (no connectors are running).
    1 - Connector server is started (configured, active connectors are running).

  • timeout - the time between two polling cycles to connect to the LDAP server (the default is 45 seconds).

  • repeat - the number of retries to connect to the LDAP server (the default is 10 times).

  • encryptionmode - whether the server should use data encryption or not (the default is 0, which means no data encryption). Set this parameter to 1 if the server should use data encryption. In this case set the pin parameter, too.

  • server_restart - the number of server restarts the watchdog performs during startup.

  • IgnoreSaveStatusInfoError - whether or not errors in saving activity status info results are ignored (the default is 0, which means that errors are not ignored). If set to 1, errors from sending activity status info are ignored and the normal workflow execution continues.

The parameters in the [metadir] section are the parameters for the LDAP bind to the configuration directory:

  • server - the name of the directory server (for example: abc123.myCompany.de).

  • user - the distinguished name of the account which is used by the C++-based Server to access the LDAP directory (for example: cn=server_admin, dxmC=dirXmetahub).

  • pwd - the password for this user account (the default is dirx). See the section "Server Password Handling" for more information on how to set passwords.

  • port - the LDAP server port (the default is 389 for non-SSL access).

  • ssl - whether or not SSL access to the LDAP server will be used (ssl=1). The default value is ssl=0. See the server SSL connections topic for more information.

  • cert-db-path - the path to the cert8.db file (only used when ssl is set to 1). The default setting is empty.
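
A [metadir] section using the example values from this list might look like the following sketch (replace the values with your environment’s settings):

[metadir]
server=abc123.myCompany.de
user=cn=server_admin,dxmC=dirXmetahub
pwd=dirx
port=389
ssl=0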

The PIN for the current private key for decryption of technical bind passwords and attributes (and the previous PIN) and the key store password (for the SOAP secure key store) are maintained in the file install_path/ssl/password.properties.

Changing the Service Login Account (Windows)

On Windows only, you can change the account that the service uses to log on. To change it, run the Initial Configuration with the C++-based Server step again and change the account.

Server Password Handling

During startup, all DirX Identity servers must read the relevant configuration information from the Identity Store. For authentication, passwords and PINs must be present in the password configuration files. The servers can read passwords or PINs in clear text or in encrypted format.

If you enter a password or PIN in clear text, the server reads it during the next startup, encrypts it and writes it back to the configuration file. From then on, the password and PIN information is no longer readable. If you are unsure whether the right password or PIN is set, or if you need to set a new one, simply replace the encrypted value with the clear-text value. During the next server startup, the password or PIN value is encrypted again.

Distributed Deployments and Scalability

This section describes the powerful capabilities of DirX Identity for setting up a highly distributed environment and supporting scalability.

Components that can be distributed include:

  • The LDAP server(s) with the DirX Identity domains and the Connectivity Database.

  • One or many Java-based Servers. Several such servers can be installed on the same host system - even several associated with the same Identity domain.

  • Message Broker instances on multiple servers.

  • One or many C++-based Servers. At most one such server can be installed on a single host system.

  • Other components, such as Business User Interface, Web Center, Identity Manager and the Web Services (both SOAP and REST). These components can be distributed freely.

Except for the communication protocols to the target systems, the required communication protocols are:

  • LDAP for access to the LDAP server(s).

  • TCP/IP for communication from and to the Message Brokers.

  • SOAP/HTTP to access the request workflow service.

  • SOAP/HTTP to the SPML Provisioning Service.

  • HTTP to the REST service.

  • JMX for the supervisors and Server Admin / Web Admin.

The configuration of the distributed server deployment is stored in the LDAP server with the Connectivity Database. The DirX Identity components Manager and Server Admin give an overview and allow changing these settings.

In addition to server and system information, the Connectivity Database stores configuration data on maintenance and Provisioning workflows and their schedules as well as their monitoring data.

While multiple DirX Identity domains can be stored in the same LDAP server, this is not possible with multiple Connectivity Databases: there can be at most one per LDAP server. This results in the following deployment options for multiple DirX Identity domains:

  • Each domain is associated with its own Connectivity Database. As a result, you need one LDAP server per domain, which also contains the Connectivity Database.

  • Several domains are associated with the same Connectivity Database, which allows you to store them all in the same LDAP server.

The next sections present different typical deployments for DirX Identity and their corresponding strengths and weaknesses, including:

  • "All-on-one-machine" deployment

  • Distributed deployment of Java-based Servers with Java-based provisioning and request workflows

  • Distributed deployment of C++-based Servers with Tcl-based workflows and issues with using a shared file system in distributed deployments.

All on One Machine

The deployment for a DirX Identity domain requires as a minimum the following components:

  • The Connectivity Database.

  • The DirX Identity domain.

  • One Java-based Server with the Request Workflow Timeout Check, all adaptors activated and the scheduler for the Java-based workflows.

  • One C++-based Server controlling all Tcl-based workflows and the status tracker.

  • One Message Broker.

Note that one Java-based Server is associated with one DirX Identity domain and cannot serve multiple domains. But one C++-based Server can run Tcl-based workflows for several DirX Identity domains.

In the simplest installation, all of these components can be deployed to the same host system. This variant is the easiest to set up and maintain.

Distributing Java-Based Servers

Because a Java-based Server can serve only one domain, you need at least one Java-based Server per domain.

You can also run multiple Java-based Servers for one domain. This configuration helps you distribute the load of request, provisioning and event-based workflows.

Read the section about naming schemes to understand how services, LDAP configuration objects and file folders are structured.

Distribution Criteria

A Java-based Server can run several types of workflows: event-driven and scheduled workflows for provisioning, event-driven workflows for entry changes, scheduled workflows for maintenance and request workflows for approvals.

All of them can run on every server and all of them can run in parallel on more than one server.

To run workflows of a certain type on a Java-based Server, you must activate the corresponding adaptors. A workflow is only started via a message. The messages contain the events to trigger, for example, the provisioning of an account or group, and the requests to start a workflow by schedule or manually from Identity Manager. JMS adaptors read the messages from the specific queues of the Message Broker and start the appropriate workflow (a minimal consumer sketch follows the table below).

Here is a list of workflow types, associated adaptors and queues. For a short description of the queues, see the section “Managing the Message Broker” → “Using Messages in DirX Identity” → “Queues of the Java-Based Server” in this guide:

Workflow Type      Adaptor                                       Queue (without the domain prefix)

Provisioning       Provisioning Request Listener                 dxm.request.provisiontots._default
                   Provisioning Request Start Workflow Listener  dxm.request.workflow.provisiontots._default
                   Import to Identity Listener                   dxm.request.importtoidentity._default

Password Changes   Password Change Listener                      dxm.event.pwd.changed
                   Account Password Change Listener              dxm.event.SvcTSAccount.pwd
                   Set Account Password Listener                 dxm.setPasswordRequest._default

Request Workflows  Request Workflow WorkflowEngine Listener      dxm.requestworkflow.workflowengine
                   Request ActivityTask Listener                 dxm.request.activityTask

Mail               Mail Listener                                 dxm.notify.mail
                   Text Message Listener                         dxm.notify.sms

Maintenance        Entry Change Listener                         dxm.event.ebr
                   Entry Change Start Workflow Listener          dxm.request.workflow.ebr
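
To make the message consumption concrete, the following minimal Java sketch reads one message from the password-change queue with the ActiveMQ JMS client. The broker URL and the MyDomain queue prefix are assumptions for this example; the actual adaptors perform considerably more work.

  import javax.jms.*;
  import org.apache.activemq.ActiveMQConnectionFactory;

  public class PasswordChangeListenerSketch {
      public static void main(String[] args) throws JMSException {
          // Broker URL and domain prefix are illustrative assumptions.
          ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
          Connection connection = factory.createConnection();
          try {
              connection.start();
              // CLIENT_ACKNOWLEDGE keeps the message in the broker until processing succeeds.
              Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
              Destination queue = session.createQueue("MyDomain.dxm.event.pwd.changed");
              MessageConsumer consumer = session.createConsumer(queue);
              Message message = consumer.receive(10000); // wait up to 10 seconds
              if (message instanceof TextMessage) {
                  System.out.println("event: " + ((TextMessage) message).getText());
                  message.acknowledge(); // remove the message from the queue
              }
          } finally {
              connection.close();
          }
      }
  }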

You can also run workflows for a selected connected system on dedicated servers. For details on how this works, see the section “Separating Traffic for Selected Connected Systems”.

Some components must run on exactly one server per domain; these are the Java scheduler and the Request Workflow Timeout Check (also called FullCheck).

The Java scheduler starts workflows by sending a start message to the Message Broker. There are two special adaptors responsible for consumption of these messages: the Entry Change Start Workflow Listener and the Provisioning Request Start Workflow Listener. The Entry Change Start Workflow Listener needs to be co-located with the Entry Change Listener; the Provisioning Request Start Workflow Listener needs to be co-located with the Provisioning Request Listener.

Within a Java-based Server, resource families control how many threads a workflow - more precisely, an activity within a workflow - can use. To some extent, this also influences the number of events the server processes. The more threads you reserve for a resource family, the more events or workflows can be processed by that server and the more events it obtains from the Messaging service.

Several Java-based Servers - of the same domain or of different ones - can run on the same physical system. The number of CPUs determines whether this makes sense: the more CPUs there are, the more threads can run in parallel.

Configuring One Java-based Server per Domain

The simplest deployment is to run one Java-based Server per domain. This server then runs the Java scheduler, all real-time and all request workflows.

When one configuration database covers several provisioning domains, you must set the flag Include domain in topic at the corresponding domain object.

This setting forces the JMS clients to include the domain name in the queue and topic names. In particular, the JMS adaptors in a Java-based Server only read from queues whose names start with their domain name; for a domain named MyDomain (an illustrative name), the password-change queue becomes MyDomain.dxm.event.pwd.changed.

In the Connectivity Database, follow the wizards and store the schedules, workflows, jobs and connected directories in domain-specific folders. A Java-based Server only loads workflows from its associated domain folder.

Configuring Multiple Java Servers per Domain

To set up multiple Java-based Servers per domain:

  • Run the configurator (either Configuration or Initial Configuration).

  • Define the domain in the Domain Configuration step

  • Select Create a new Java Server from the drop-down list of the Server to update or create field of the Java-based Server step.

  • Define the relevant parameters. Make sure you use free ports.

  • Repeat this step for additional Java-based Servers you want to create.

  • Start DirX Identity Manager (Connectivity view group) and select Expert View.

  • Open Configuration → DirX Identity Servers → Java Servers → domain. You should see all your configured server instances and, if you open them, all movable adaptors.

  • Select Manage IdS-J Configuration from the context menu of a Java-based Server node.

  • Assign the movable adaptors, the scheduler and the request workflow processing to the servers. Deactivate unused adaptors.

  • Click OK to store the configuration or Cancel to abort it.

  • Restart all Java-based Servers to load the changed configuration.

  • Use Server Admin to check that the adaptors are configured correctly.

Separating Traffic for Selected Connected Systems

You can separate the traffic for synchronizing selected connected systems completely from the synchronization of other connected systems. You can dedicate the provisioning of a target system to one or more Java-based Servers and to separate threads within a Java-based Server.

Reasons for such a configuration can include:

  • Separating the traffic for target systems with many events from others

  • Separating slow target systems from others

  • Running workflows for a specific target system behind the firewall

  • Support for file-based workflows where the files to import or export are only accessible from a dedicated system

  • Always running workflows for one target system on the same server for easier problem analysis

  • Support for the DirX Audit History synchronization workflows that are not associated with any target system

Within a Java-based Server, the threads for processing workflows for such selected connected systems are decoupled from those processing other connected systems. So even if you have only one Java-based Server deployed, the provisioning of a slow target system does not slow down the provisioning of the other target systems.

Note that this feature can also be applied to connected directories that are not associated with any target system. Prominent examples are source systems from which entries (typically users) are imported into the Identity Store. Another example is the DirX Audit History Database with its synchronization workflows.

To assign a target system or a cluster of target systems to a Java-based Server, assign the corresponding connected directory (a cluster has only one connected directory) to the server(s):

  • In DirX Identity Manager’s Connectivity view group, select Expert View.

  • In the folder Connected Directories, select the connected directory. Open the tab Connected Directory.

  • In the section Associated Server, select one or more Java-based Servers.

  • If you want more than one thread processing provisioning requests for this connected system, enter the number in the optional Listeners per Target System field. The default is 1. Note: this number applies only to the queues dxm.request.provisiontots and dxm.request.importtoidentity (see below). For processing password changes and for running a complete validation or delta synchronization, one thread per queue should be enough. As an exception, if the connected directory has no associated target system, the number of threads is applied to the queue dxm.request.workflow.provisioning. In this special case, it is possible to run multiple full or delta synchronization workflows in parallel.

To activate this configuration, either:

  • Restart the servers or

  • Select each server in Configuration → DirX Identity Servers → Java Servers and then select Load IdS-J Configuration from the context menu.

The servers use the following queues:

  • domain.dxm.request.provisiontots.target system identifier

  • domain.dxm.request.workflow.provisioning.target system identifier

  • domain.dxm.setpasswordrequest.target system identifier

  • domain.dxm.request.importtoidentity.target system identifier

If there are target systems associated with the connected directory, the target system identifier is built from the attributes type, cluster and domain of the target system in lowercase: type.cluster.domain. For a target system that is part of a cluster, the domain part is empty and the identifier is built as type.cluster. For example, an Active Directory with the forest name “Europe” and the domain “Germany” has the identifier ads.europe.germany. If this target system is part of a cluster, the identifier is ads.europe.

If there are no target systems associated with the connected directory:

  • Messages for starting a workflow providing its DN go to the queue domain.dxm.request.workflow.provisioning, and the target system identifier is built from the type and the display name of the connected directory in lowercase: type.displayname. For example, a file with the display name “LDIFfile” has the identifier ldif.ldiffile.

  • For messages to import a given entry to the Identity Store, the queue name domain.dxm.request.importtoidentity is extended with the whenApplicable section of the workflow identifier. For example, a workflow with type “ldap”, cluster “import” and domain “userldap” has the identifier ldap.import.userldap. This allows running workflows that import users and business objects from the same source directory in separate threads.

Avoid using special characters in type, cluster and domain names, because these names are used to build queue names dynamically. In particular, avoid the following characters: “.”, “*”, “>”, “?”, “\”, “/”.
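
The following hypothetical Java helper (not part of the product) condenses these naming rules: it lowercases the parts, joins them with dots and rejects the forbidden characters listed above.

  public class QueueNameSketch {
      // Characters that must not appear in type, cluster or domain names.
      private static final String FORBIDDEN = ".*>?\\/";

      // Builds type.cluster.domain, or type.cluster when the domain part is empty.
      static String targetSystemIdentifier(String type, String cluster, String domain) {
          check(type);
          check(cluster);
          String id = type + "." + cluster;
          if (domain != null && !domain.isEmpty()) {
              check(domain);
              id = id + "." + domain;
          }
          return id.toLowerCase();
      }

      private static void check(String part) {
          for (char c : FORBIDDEN.toCharArray()) {
              if (part.indexOf(c) >= 0) {
                  throw new IllegalArgumentException("'" + c + "' is not allowed in: " + part);
              }
          }
      }

      public static void main(String[] args) {
          // Prints MyDomain.dxm.request.provisiontots.ads.europe.germany
          System.out.println("MyDomain.dxm.request.provisiontots."
                  + targetSystemIdentifier("ADS", "Europe", "Germany"));
      }
  }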

The adaptors for the queues domain.dxm.request.provisiontots, domain.dxm.request.workflow.provisioning, domain.dxm.setpasswordrequest, and domain.dxm.request.importtoidentity dispatch the messages either to the appropriate target system-specific queue or to the respective default queue (for example, domain.dxm.request.provisiontots._default). For information about which servers process the default queue messages, see the section “Distribution Criteria”.

These dispatchers and the adaptors for the target system-specific queues behave differently from the adaptors for the normal provisioning queues:

  • They do not store the received messages in their own file repository and therefore need no special handling for high availability. Instead, the adaptors process each message immediately and acknowledge it to the Message Broker only when processing is finished. In case of breakdown, the not-yet-acknowledged messages are still available in the Message Broker and are delivered when the adaptor re-connects.

  • They do not use the workflow engine. Instead, they perform the error handling on their own. They pass the messages directly to the join activity of the workflow. If an error occurs, they re-send the message to the Broker with a delay according to the configured retry wait time. If that is not possible because the error is not considered temporary or because the retry limit is reached, the adaptor runs the error activity and sends the message to the Dead Letter Queue (a sketch of this retry pattern follows the list).
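
The retry behavior can be pictured with ActiveMQ's scheduled-delivery feature. This is only a sketch of the pattern, not the product's implementation; it assumes the broker is started with schedulerSupport enabled, and the retry limit, property name and queue names are illustrative.

  import javax.jms.*;
  import org.apache.activemq.ScheduledMessage;

  public class RetrySketch {
      // Re-sends a failed message to its queue with a delay, or gives up after maxRetries.
      static void retryLater(Session session, Queue queue, TextMessage failed,
                             int retryCount, int maxRetries, long waitMillis)
              throws JMSException {
          if (retryCount >= maxRetries) {
              // Permanent error or retry limit reached: route to the Dead Letter Queue.
              session.createProducer(session.createQueue("ActiveMQ.DLQ")).send(failed);
              return;
          }
          TextMessage copy = session.createTextMessage(failed.getText());
          copy.setIntProperty("retryCount", retryCount + 1);
          // Broker-side delayed delivery (requires schedulerSupport="true" on the broker).
          copy.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, waitMillis);
          session.createProducer(queue).send(copy);
      }
  }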

You can monitor the dispatchers and adaptors both in WebAdmin and with tools like Nagios using JMX MBeans.

With WebAdmin, select the following items in the left-hand tree:

  • Provision Dispatchers - in the details view at the right, you’ll see a table with the queues the dispatchers listen to and the number of messages received and failed; that is, those that could not be forwarded to the target queues.

  • Provision TS Listeners - in the details view at the right, you’ll see a table with the queues target system-specific adaptors are listening to and the number of messages that have been received and processed either successfully or not. Note that re-delivered messages in case of temporary errors are only delivered to the same target system-specific queue and not again to the queue on which the dispatcher is listening. So, they are visible only in this view.

The object names of the JMX MBeans for dispatchers and target system-specific adaptors start with com.siemens.idm:type=idsj and then identify the listener and the queue by the parameters “topic” and “name”:

  • Dispatchers - com.siemens.idm:type=idsj,topic=ProvMsgDispatcher,name=queue, where the values for queue are provisiontots, setpasswordrequest, importtoidentity and workflow.provisiontots.

  • Target system-specific adaptors - com.siemens.idm:type=idsj,topic=ProvTSListener,name=queue.target-system, where the queue names are the same as for the dispatchers and target-system is in the format type.cluster.resource. (A monitoring sketch follows.)
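
As a minimal monitoring sketch, the following Java program queries all dispatcher MBeans via JMX. The JMX service URL (host and port) is an assumption; the attribute names available on each MBean depend on the MBean definition.

  import java.util.Set;
  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class DispatcherMonitorSketch {
      public static void main(String[] args) throws Exception {
          // Host and port of the IdS-J JMX endpoint are illustrative.
          JMXServiceURL url = new JMXServiceURL(
                  "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
          try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
              MBeanServerConnection mbsc = connector.getMBeanServerConnection();
              // Match all provisioning dispatcher MBeans.
              ObjectName pattern = new ObjectName(
                      "com.siemens.idm:type=idsj,topic=ProvMsgDispatcher,name=*");
              Set<ObjectName> names = mbsc.queryNames(pattern, null);
              for (ObjectName name : names) {
                  System.out.println(name);
              }
          }
      }
  }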

Distributing C++-based Server Components

In all configurations, we assume that we are handling a two-step Tcl-based workflow (a workflow with two activities) that exports data from the Identity store and imports data into Active Directory. Other types of workflows with only one activity (for example, an LDAP2LDAP workflow) or with more than two activities can be discussed in a similar way.

All computers that belong to the DirX Identity domain must be time-synchronized using operating system mechanisms. Otherwise, scheduling conflicts or incorrect runs of a distributed workflow can occur.

For the two-step workflow, the following typical configurations exist:

  • All parts on central server

  • Target activity distributed

  • All parts on target server

As a minimum for running Tcl-based workflows, the following components are required:

  • One C++-based Server (IdS-C). It contains the status tracker and a scheduler for Tcl-based workflows.

  • One Message Broker.

  • The LDAP server storing the Connectivity Database and the identity domain (users, accounts, groups, and so on) to be provisioned.

For understanding the discussions below, you should be aware of the following:

  • A Tcl-based workflow is associated with a system, thus with the IdS-C server running on this system. The scheduler and the workflow engine in this server are responsible for processing and controlling the workflow.

  • A target activity is associated with a system and with the IdS-C server on that system. This IdS-C server is responsible for processing and controlling this activity.

The following sections discuss recovery and safety mechanisms as well as security issues of the different alternatives.

All Items on Central Server

In general, it makes sense to keep all components on the same machine to optimize performance and to minimize dependence on the network.

In this configuration, both the C++- and the Java-based Server together with the messaging service, the workflow and all activities run on the central server.

The IdS-C server reads workflow configuration data from the Connectivity Database. In the first activity, metacp reads data from the Identity store via LDAP and stores the processed result to a file. In the second activity, the ADS agent reads this file and writes to Active Directory on the target machine via the native interface. The following figure illustrates this configuration.

All Items on a Central Server
Figure 2. "All Items on a Central Server" Configuration

Strengths:

  • All DirX Identity processes run on the central machine, making administration easy.

  • The data file does not need to be transferred via the network. It is only visible on the central machine. No shared file system needs to be set up and maintained.

  • The target server (hosting Active Directory) does not require the installation of any DirX Identity component.

  • Except for communication with the target system API, all communication depends only on the central machine being available (in particular, all JMS messages can always be sent and delivered). The availability of this machine therefore defines the availability of the entire DirX Identity domain.

Weaknesses:

  • If all workflows run on the central machine, the machine can become heavily loaded. Distribute the schedules over time to avoid this situation.

  • Data is transferred via the native target system API (in our example, the Active Directory API). Security is determined by the features of this interface.

  • You must ensure that the network connection is fast enough for the specific agent.

Recovery:

For the basic recovery features, see the section "Recovery and Safety Mechanisms".

If the central machine breaks down and is re-booted during a workflow run, it cannot be guaranteed that the workflow will be restarted.

Target Activity Is Distributed

In this configuration, the Identity Servers with the messaging service, the workflow and the first activity (metacp) run on the central machine.

metacp reads data from the Identity store via LDAP and writes the mapping result to a file. The file must be transferred via the network or made accessible through a shared file system. The ADS agent then reads it and imports the entries to Active Directory on the target machine via the native interface. The following figure illustrates this configuration.

Target Activity is Distributed
Figure 3. "Target Activity is Distributed" Configuration

Strengths:

  • The target system interface is accessed on the target machine, so it is not visible on the network.

  • You can distribute load over the network.

  • The configuration information that is transferred via the LDAP connection between the different C++-based Servers can be secured via SSL.

  • The scheduler runs only on the central machine and can therefore send the relevant messages to start workflows to the messaging service. The messages are stored permanently and are resent until the deviation time is reached.

Weaknesses:

  • A C++-based Server and the ADS agent must be installed and maintained on the target machine.

  • The data file must be transferred via the network or made accessible via a shared file system. In both cases, data is visible on the network. Use encrypted file transfer over the messaging service to secure the transfer.

  • If used, shared file systems are an additional administrative task.

  • If the network is not available, the C++-based Server on the central machine cannot send messages to the C++-based Server on the target machine to start the agent. The workflow will fail in this situation. It is started again at the next scheduled time or after a defined retry interval.

  • If the network is not available, the C++-based Server at the target machine cannot deliver status information to the messaging service. This information is lost.

Recovery:

For the basic recovery features, see the section "Recovery and Safety Mechanisms".

If the central machine breaks down and is re-booted during a workflow run, it cannot be guaranteed that the workflow will be restarted.

All Items on Target Server

In this configuration, the workflow and both activities run on the target machine, while the LDAP server and the messaging service remain on the central machine.

metacp reads data from the Identity store via LDAP over the network and writes the mapping result to a file. The ADS agent then reads this file and writes it to Active Directory on the same machine via the native interface. The following figure illustrates this configuration.

All Items on a Target Server
Figure 4. "All Items on a Target Server" Configuration

Strengths:

  • The target system interface is accessed on the target machine, so it is not visible on the network.

  • You can distribute load over the network.

  • The data file does not need to be transferred via the network. It is only visible on the target machine. A shared file system does not need to be set up and maintained.

Weaknesses:

  • A C++-based Server, metacp and the ADS agent must be installed and maintained on the target machine.

  • If the network is not available, the C++-based Server on the target machine cannot send messages to the messaging service to start the agents. The workflow will fail in this situation. It is started again at the next scheduled time.

  • If the network is not available, the C++-based Server on the target machine cannot deliver status information to the messaging service. This information is lost.

Reduce File Handling in Status Area

To improve scalability, consider reducing the status creation and file handling of Tcl-based workflows. Identity Manager allows you to configure this on a fine-grained basis:

  • For each file, set the Save Mode and Copy to Status Area flags correctly to avoid unnecessary saving of files, especially if the file size is large.

  • Compressed status entries can reduce workload on the DirX Identity status tracker. Choose an appropriate detail level of the Status Compression Mode option at the central configuration object or at individual workflow objects. You can reduce the detail level or completely suppress status entries for workflows that succeeded.

Setting Up a Shared File System for Distribution

You can run workflows in a distributed environment. By default, the file service automatically handles all necessary file transfers when the activities run on different machines. To enhance performance, you can set up shared file system information in the relevant tab of the DirX Identity server object.

Follow this general procedure:

  • Select the correct C++-based Server for each workflow and activity. The best method for viewing and adjusting these parameters is to use the workflow structure view, which you can access from the Global View and Expert View.

  • DirX Identity detects necessary file exchange between activities automatically if the connectivity is defined by channels.

  • In a Windows environment, you cannot use the system account to run your C++-based Servers because it is not allowed to access network resources. Use accounts that can access network resources instead.

See also the section "Workflow Design Rules" for valuable hints about this subject.

Distributing Message Broker Instances

Multiple Message Broker instances can be spread over IdS-J and external servers. Because only one of these Message Brokers is accessible to the clients at a time, this setup focuses on high availability rather than scalability.

The important deployment configuration is related to the persistent message store, which must be located on a shared drive to which all Message Broker instances have access.

High Availability and Recovery

DirX Identity has some built-in mechanisms that avoid error situations or help to recover from them:

  • The messaging service stores messages permanently in its messaging repository.

  • The scheduler starts workflows at a defined time (at the latest during the deviation time after the workflow start time). Thus, you can be sure that your workflow does not run at times when it should not run.

  • You can define a retry interval for each schedule. If a workflow fails and the deviation time is not over, DirX Identity restarts the workflow until it succeeds. This feature can overcome temporary errors (for example, when the network is temporarily unavailable).

  • All messages except for status tracker messages have a lifetime that guarantees that actions are not started after the defined timeout of the workflow. The messaging service deletes these messages automatically when they have timed out.

  • The status tracker messages have no timeout set. Thus, all status messages that the messaging service has saved are delivered when the network and the messaging service are available again. (A JMS sketch of these lifetime settings follows the list.)
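
In JMS terms, this lifetime corresponds to the time-to-live set by the message producer. The following minimal Java sketch illustrates the idea; the broker URL, queue name and the one-hour lifetime are illustrative assumptions.

  import javax.jms.*;
  import org.apache.activemq.ActiveMQConnectionFactory;

  public class MessageLifetimeSketch {
      public static void main(String[] args) throws JMSException {
          ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
          Connection connection = factory.createConnection();
          try {
              Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
              MessageProducer producer =
                      session.createProducer(session.createQueue("MyDomain.dxm.event.ebr"));
              // Expire the message when the workflow timeout is reached (illustrative: 1 hour).
              producer.setTimeToLive(60 * 60 * 1000L);
              producer.send(session.createTextMessage("start event"));
              // A time-to-live of 0 (the JMS default) means the message never expires,
              // which matches the behavior of status tracker messages.
          } finally {
              connection.close();
          }
      }
  }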

During runtime, DirX Identity produces and manages data in various repositories, including:

  • Repository of the Message Broker - contains the messages that have not yet been consumed by Java server adaptors; for example, password changes or events for real-time provisioning. The repository should be located on a shared network device.

  • Repositories of adaptors in Java-based Servers - contain the messages for which a real-time workflow has been triggered, the Dead Letter Queue and - if High Availability is enabled - also the backup of the monitored adaptors.

  • LDAP directory - contains configuration and status data; for example, the current state of a request workflow or information about the last delta run of a Provisioning workflow.

Note that the adaptors for target system-specific messages as well as the Resolution Adapter don’t have a repository. They process each message immediately and can therefore acknowledge it, and thus delete it at the Message Broker, automatically after it has been processed.

When high availability is enabled, the adaptor repositories of one Java-based Server are backed up in real time by the backup adaptor in another, monitoring Java-based Server. In case of a system crash, either the monitoring supervisor (automatically) or an administrator (manually, using Server Admin) instructs the monitoring Java-based Server to restore the messages from the repository backups and send them to the Message Broker.

In addition, or as an alternative, you can perform a scheduled backup of the Java-based Server adaptor repositories; for example, every day. This helps you to restore the system (recovery) or to set up the system at another location (disaster recovery). For configuration of a joint backup workflow, see the section "Joint Backup Workflow" in the chapter "Maintenance Workflows" in the DirX Identity Application Development Guide. Currently, you should set up a directory backup at the scheduled backup time by hand.

Diagnostic Information

The Java-based Server and the C++-based Server write valuable information during server startup.

The servers write a diagnostic file during each start. It consists of two sections:

System Information - provides information about the hardware and operating system in use.

Server Information - provides a set of server-specific properties and information.

This information is useful for determining the conditions of a specific server run; for example, the one where an error occurred.

For detailed information, see the DirX Identity Troubleshooting Guide.

Managing Daylight Savings Time

All parameters that define dates in schedules within DirX Identity are stored in GMT format, which means that a workflow runs at a fixed world time. However, these parameters are displayed in local time to allow for consistent handling of workflow starts in a worldwide distributed DirX Identity scenario.

Working in a country that switches between summer and winter time (daylight savings time) may require you to adjust the schedules so that they start in relation to fixed local time. For this purpose, you can use the Tcl script shiftSchedules.tcl in the folder install_path/tools/scheduling.

To adjust the timing:

  • Customize the script’s bind parameters before you use it, including the bind password. We recommend that you operate on a copy of the script and protect the script against unauthorized read/write access.

  • If the clock has been moved forward one hour, adjust the schedules (-1 h) with the command:
    metacp shiftSchedules.tcl backwards

  • If the clock has been set back by one hour, adjust the schedules (+1 h) with the command:
    metacp shiftSchedules.tcl forwards

  • The script writes a log file shiftSchedules.log into the same folder in which shiftSchedules.tcl is located.

  • You can also call the script with an absolute path, for instance:
    metacp /home/metatest/myLocation/shiftSchedules.tcl forwards

  • Scripts that call this script must contain an initialization part that provides the correct runtime environment for executing metacp. For UNIX, this is done by a directive in the form install_path/.dirxmetarc.

  • Use the operating system scheduler to start this script regularly when winter or summer time begins (an illustrative crontab setup follows the list).
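
As an illustrative crontab setup, the following entries run the script on the last Sunday of March and October (a European DST calendar); the date test skips the other Sundays in the date range. The installation path and start time are assumptions, and the entries presuppose that the metacp runtime environment is initialized for the cron user as described above.

  # Clock moved forward (last Sunday of March): shift schedules by -1 h
  0 4 25-31 3 * [ "$(date +\%u)" = "7" ] && metacp /opt/dirxidentity/tools/scheduling/shiftSchedules.tcl backwards
  # Clock set back (last Sunday of October): shift schedules by +1 h
  0 4 25-31 10 * [ "$(date +\%u)" = "7" ] && metacp /opt/dirxidentity/tools/scheduling/shiftSchedules.tcl forwards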

Connector Frameworks

DirX Identity provides two types of connector framework that can be used to control additional custom target system APIs:

  • Identity Connector Framework for Java - allows controlling event-based synchronizations to target systems that provide Java interfaces.

  • Identity Connector Framework for C/C++ - allows controlling event-based synchronizations to target systems that provide C or C++ interfaces.

The following sections provide a high-level overview of each framework. For detailed information, see the DirX Identity Integration Framework Guide.

Identity Connector Framework for Java

The Identity Connector Framework for Java provides a comprehensive set of functionality that is built completely on Service Provisioning Markup Language (SPML). The following figure illustrates its components.

Identity Connector Framework (Java) Components
Figure 5. Identity Connector Framework (Java) Components

Its most important components are:

  • Reader - allows for the transformation of external formats into SPML format (for example, LDIF change files)

  • Writer - transforms internal SPML format to external formats (for example, LDIF content files)

  • Controller - controls all other components

  • Connector Interface - the only interface the Connector Plugin must recognize. Provides all information in SPML format.

  • Connector Plugin - the piece of custom code that maps the SPML internal format to the API calls for a specific target system.

The following example explains how a connector could work (let’s assume the connector is waiting permanently for password changes):

  • After startup, the controller gives control to the reader.

  • The reader either reads events from a channel in event-driven workflows or from a file in scheduled workflows. It transforms the data into SPML format; for example, a modify request to set a password.

  • The controller takes the SPML request and passes it to the request transformer. This component knows how to map the attributes in the SPML request and generates a modified SPML request. The request transformer is optional.

  • The controller takes this request and passes it to the connector plugin. The connector performs the modification request with the delivered attributes.

  • The connector returns an SPML response with the result of the operation.

  • The controller passes this information to the optional response transformer, which maps the information accordingly and returns it to the controller.

  • The controller takes it and instructs the writer to send the response. In an event-based workflow, the writer puts the response into the out channel of the activity; from there, the workflow engine passes it to the adaptor. A SOAP adaptor would return it to the SOAP client; the message queue adaptor ignores it. In the case of a batch-like job, the writer typically stores the response in a file. (A skeletal connector plugin example follows the list.)
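
To make the flow concrete, here is a skeletal connector plugin in Java. All type and method names are purely illustrative stand-ins; they do not reproduce the actual framework API, which is described in the DirX Identity Integration Framework Guide.

  import java.util.HashMap;
  import java.util.Map;

  // Illustrative stand-ins for the framework's internal SPML types (not the real API).
  class SpmlModifyRequest {
      String identifier; // for example, the account DN
      Map<String, String> modifications = new HashMap<>();
  }

  class SpmlResponse {
      boolean success;
      String errorMessage;
  }

  // Hypothetical connector interface: the only contract the plugin must implement.
  interface ConnectorPlugin {
      SpmlResponse modify(SpmlModifyRequest request);
  }

  // A plugin that forwards a password change to an imaginary target system API.
  public class PasswordConnectorSketch implements ConnectorPlugin {
      public SpmlResponse modify(SpmlModifyRequest request) {
          SpmlResponse response = new SpmlResponse();
          try {
              // Map the SPML modification to the target system's native call.
              TargetSystemApi.setPassword(request.identifier,
                      request.modifications.get("password"));
              response.success = true;
          } catch (Exception e) {
              response.success = false;
              response.errorMessage = e.getMessage();
          }
          return response;
      }

      // Placeholder for the target system's client library.
      static class TargetSystemApi {
          static void setPassword(String account, String password) {
              System.out.println("password set for " + account);
          }
      }

      public static void main(String[] args) {
          SpmlModifyRequest request = new SpmlModifyRequest();
          request.identifier = "cn=JohnDoe,ou=accounts";
          request.modifications.put("password", "secret");
          System.out.println("success: " + new PasswordConnectorSketch().modify(request).success);
      }
  }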

Identity Connector Framework for C/C++

The C/C++ Connector Framework is much simpler than the Java one. For example, it does not have its own controller because it uses the Java Connector Framework controller. Nevertheless, it provides many helpful services that the connector plugin can use. The following figure illustrates the framework components.

Identity Connector Framework (C/C++) Components
Figure 6. Identity Connector Framework (C/C++) Components

Its most important components are:

  • Adaptors - Listeners and Senders allow communication with either JMS-based or HTTP/SOAP based components.

  • A Configuration Manager transfers the configuration information from the connectivity configuration to all components, including the connector plugin.

  • Transport Receivers and Transport Senders are responsible for marshalling and un-marshalling the SPML data from streams to C++ SPML objects and vice-versa.

  • For each connector plugin, there is a Wrapper that loads and instantiates the plugin, supplies the configuration data and relieves the plugin from internal communication handling.

  • A Logging component provides standard logging mechanisms.

  • Connector Interface - the only interface the connector plugin must recognize. Provides all information in SPML format.

  • Connector Plugin - the piece of custom code that maps the SPML internal format to the API calls for a specific target system.