Blog dedicated to Oracle Applications (E-Business Suite) Technology; covers Apps Architecture, Administration and third party bolt-ons to Apps

Tuesday, February 24, 2009

Concurrent Processing Architecture

The current Concurrent Processing architecture with Global Service Management consists of the following processes and communication model, where each process is responsible for performing a specific set of routines and communicating with parent and dependent processes.

Internal Concurrent Manager (FNDLIBR process) - Communicates with the Service Manager.

The Internal Concurrent Manager (ICM) starts, sets the number of active processes for, monitors, and terminates all other concurrent processes through requests made to the Service Manager, including restarting any failed processes. The ICM also starts, stops, and restarts the Service Manager on each node. The ICM performs process migration during an instance or node failure. The ICM is active on a single node; this is also true in a PCP environment, where the ICM is active on one node at all times.

Service Manager (FNDSM process) - Communicates with the Internal Concurrent Manager, Concurrent Manager, and non-Manager Service processes.

The Service Manager (SM) spawns and terminates manager and service processes (these could be Forms or Apache listeners, the Metrics or Reports Server, and any other process controlled through Generic Service Management). When the ICM terminates, the SM residing on the same node as the ICM also terminates; the SM is 'chained' to the ICM. The SM only reinitializes after termination when there is a function it needs to perform (starting or stopping a process), so there may be periods of time when the SM is not active, and this is normal. All processes initialized by the SM inherit the same environment as the SM. The SM's environment is set by the APPSORA.env file and the gsmstart.sh script. The TWO_TASK used by the SM to connect to a RAC instance must match the instance_name from GV$INSTANCE. The apps_ listener must be active on each CP node to support the SM connection to the local instance. A Service Manager should be active on each node where a Concurrent or non-Manager service process will reside.

Internal Monitor (FNDIMON process) - Communicates with the Internal Concurrent Manager.

The Internal Monitor (IM) monitors the Internal Concurrent Manager and restarts any failed ICM on the local node. During a node failure in a PCP environment, the IM will restart the ICM on a surviving node (multiple ICMs may be started on multiple nodes, but only the first ICM started will eventually remain active; all others will gracefully terminate). An Internal Monitor should be defined on each node to which the ICM may migrate.

Standard Manager (FNDLIBR process) - Communicates with the Service Manager and any client application process.

The Standard Manager is a worker process that initiates and executes client requests on behalf of Applications batch and OLTP clients.

Transaction Manager - Communicates with the Service Manager and any user process initiated on behalf of a Forms or Standard Manager request. See Note 240818.1 regarding Transaction Manager communication and setup requirements for RAC.

Parallel Concurrent Processing (PCP) is activated along with Generic Service Management (GSM); it cannot be activated independently of GSM. With parallel concurrent processing implemented with GSM, the Internal Concurrent Manager (ICM) tries to assign valid nodes to concurrent managers and other service instances. Primary and secondary nodes need not be explicitly assigned; however, you can assign primary and secondary nodes for directed load and failover capabilities.

Note: In previous releases, you must have assigned a primary and secondary node to each concurrent manager.

Internal Concurrent Manager:
The Internal Concurrent Manager can run on any node, and can activate and deactivate concurrent managers on all nodes. Since the Internal Concurrent Manager must be active at all times, it needs high fault tolerance. To provide this fault tolerance, parallel concurrent processing uses Internal Monitor Processes.

Internal Monitor Processes:
The sole job of an Internal Monitor Process is to monitor the Internal Concurrent Manager and to restart that manager should it fail. The first Internal Monitor Process to detect that the Internal Concurrent Manager has failed restarts that manager on its own node.

Only one Internal Monitor Process can be active on a single node. You decide which nodes have an Internal Monitor Process when you configure your system. You can also assign each Internal Monitor Process a primary and a secondary node to ensure failover protection.

Internal Monitor Processes, like concurrent managers, have assigned work shifts, and are activated and deactivated by the Internal Concurrent Manager.

However, automatic activation of PCP does not additionally require that primary nodes be assigned for all concurrent managers and other GSM-managed services. If no primary node is assigned for a service instance, the Internal Concurrent Manager (ICM) assigns a valid concurrent processing server node as the target node. In general, this is the same node where the Internal Concurrent Manager is running. If the ICM is not on a concurrent processing server node, the ICM chooses an active concurrent processing server node in the system. If no concurrent processing server node is available, no target node is assigned. Note that if a concurrent manager does have an assigned primary node, it will only try to start up on that node; if the primary node is down, it will look for its assigned secondary node, if one exists. If both the primary and secondary nodes are unavailable, the concurrent manager will not start (the ICM will not look for another node on which to start it). This strategy prevents overloading any node in the case of failover.
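The node-selection rules above can be sketched as a small Python model. This is an illustrative sketch only; the function and node names are assumptions for the example, not actual Oracle code:

```python
def pick_target_node(primary, secondary, icm_node, active_cp_nodes):
    """Sketch of ICM target-node selection; returns None if the manager cannot start."""
    if primary is not None:
        # With a primary assigned, only the primary/secondary pair is considered.
        if primary in active_cp_nodes:
            return primary
        if secondary in active_cp_nodes:
            return secondary
        return None  # the ICM does not look for another node
    # No primary assigned: prefer the ICM's own node if it is a CP server node.
    if icm_node in active_cp_nodes:
        return icm_node
    # Otherwise pick any active concurrent processing node, if one exists.
    nodes = sorted(active_cp_nodes)
    return nodes[0] if nodes else None
```

With a primary of "cp1" and a secondary of "cp2", a manager lands on "cp2" only when "cp1" is down, and nowhere at all when both are down; with no primary, it follows the ICM's node or any surviving CP node.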

The concurrent managers are aware of many aspects of the system state when they start up. When an ICM successfully starts up, it checks the TNS listeners and database instances on all remote nodes; if an instance is down, the affected managers and services switch to their secondary nodes. Processes managed under GSM will only start on nodes that are in Online mode. If a node is changed from Online to Offline, the processes on that node will be shut down and will switch to a secondary node if possible.

Concurrent processing provides database instance-sensitive failover capabilities. When an instance is down, all managers connecting to it switch to a secondary middle-tier node. However, if you prefer to handle instance failover separately from such middle-tier failover (for example, using the TNS connection-time failover mechanism instead), use the profile option Concurrent:PCP Instance Check. When this profile option is set to OFF, Parallel Concurrent Processing will not provide database instance failover support; however, it will continue to provide middle-tier node failover support when a node goes down.
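A minimal sketch of this failover decision, with the Concurrent:PCP Instance Check profile reduced to a simple ON/OFF string (the function name and arguments are assumptions for illustration, not Oracle code):

```python
def should_switch_to_secondary(node_up, instance_up, pcp_instance_check="ON"):
    """Decide whether a manager should fail over to its secondary node."""
    if not node_up:
        # A middle-tier node failure triggers failover regardless of the profile.
        return True
    if not instance_up and pcp_instance_check == "ON":
        # A database instance failure triggers failover only when the check is ON.
        return True
    return False
```

Setting the profile to OFF simply removes the instance-down branch, leaving TNS connection-time failover (or whatever you configure) to handle the database side.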

For the Internal Concurrent Manager you assign the primary node only.


Node Names:
The Concurrent Managers, at startup, call the UNIX executable uname to determine the node name of the machine. The call is 'uname -a', which returns information about the operating system and the name of the node.

Every request that runs records this node name, the time, the absolute path to the output file, and other information in the FND_CONCURRENT_REQUESTS table, keyed by the request ID.

This node name can be an alias but it should be something that the domain name server will resolve.

This node name value can be changed by running the UNIX setuname command or the hostname command, for example: setuname -n newnodename

Once you change the node name, be sure to bring down the concurrent managers (if they are not already) and bring them back up. Go to the Concurrent Manager Administer screen to see the new node name on the row with the Internal Manager information.
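The node-name lookup described above can be checked directly from the shell (assuming a standard UNIX/Linux environment):

```shell
# 'uname -n' prints only the node name; 'uname -a' also includes
# operating-system details, which is the form the managers call.
uname -n
uname -a
```

The value printed by 'uname -n' is what you should expect to see in the Node column of the Administer Concurrent Managers screen after a manager restart.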

Metalink Notes: 241370.1, 602899.1
