Monday, April 28, 2014

Difference between Route, Publish and Service Callout actions in OSB

There are mainly three types of MEPs (Message Exchange Patterns) for interacting with end applications:
1) Synchronous interaction
2) Asynchronous interaction
3) One way interaction

Synchronous interaction basically means communicating with an endpoint and waiting for its reply before processing the response.
Service Callout: The Service Callout action is meant exactly for that. If you use a Service Callout in either the request or the response pipeline, the proxy service will wait for the reply from the end application.
So whenever you want to interact with a synchronous process, you can opt for a Service Callout. But this is not the only action you can use when interacting with a synchronous process.

Publish: As mentioned earlier, there can be three types of interaction between applications. OSB does not provide a specific action for asynchronous interaction, and it does not suit OSB's architecture either. So if you are planning to implement asynchronous service interaction, it is better to choose BPEL (Oracle SOA) instead of OSB; you can implement asynchronous interaction in OSB, but it is not as straightforward as in BPEL.
Apart from asynchronous interaction, there is another interaction type: one-way interaction.
OSB provides an action for this type of interaction, named Publish. The Publish action only sends the request and never waits for any response from the back-end service (the default behavior; if required, you can use Routing Options to change it). Even if an exception happens, you cannot catch it using the Publish action, because it does not block the request thread and does not wait for a reply.

So far we have come to know about the Service Callout and Publish actions. Now we will discuss the most important action in an OSB proxy flow for integrating with different applications: the Route action.

Route: The Route action has some special features in a proxy flow. If you use Route in your proxy flow, it will be the last action in the flow; you cannot add any other action after it. It is in the Route action that the context is switched. By context switching I mean switching from the request context to the response context: in the Route action the request body is mapped to the response body, and similarly for all the context variables (body, fault, etc.).
Inside a Route you can use Routing Options, and through them you can communicate with back-end applications. You can interact with both synchronous and one-way processes using Routing Options.
Moreover, it has a feature that makes the processing/interaction happen within the same transactional boundary (unless configured separately). By doing this, it ensures that the response comes back from the back-end application regardless of the process type (synchronous/one-way).

There is also an action called Routing Options, which is very useful for managing threads/transactions for the interaction.

Few more differentiating parameters:
Route
  1. Last node in request processing.  It can be thought of as a bridge between request pipeline processing and the response pipeline processing.
  2. You can only execute one route in your Proxy Service.
  3. Can only be created in a route node.
  4. OSB will wait for the Route call to finish before continuing to process.
    1. If you are calling a Business service and you specify Best Effort for QoS (Quality of Service), then OSB will release the thread it is holding while the business service executes.
    2. If you are calling a Business service and you specify Exactly Once or At Least Once for QoS, then OSB will hold onto the thread while the business service executes.
    3. If you are calling a local Proxy service, then OSB will hold onto the thread until the Proxy service finishes executing.
Service Callout
  1. Can have multiple Service Callout nodes in a Proxy service.
  2. Pipeline processing will continue after a Service Callout.
  3. Can be invoked from the request and/or response pipelines.
  4. Used to enrich the incoming request or outgoing response. For example, a call to get a country code.
  5. Used for real time request/response calls (Synchronous calls).
  6. OSB will hold a thread and not continue until the Service Callout completes.
  7. Can tie up resources and degrade performance under heavy loads.
Publish
  1. Can be synchronous or asynchronous
    1. If you are calling a business service with a Quality of Service of Best Effort, then it will be an asynchronous call.
    2. If you call a business service with a Quality of Service of Exactly Once or At Least Once, OSB will wait until the processing in the business service completes before proceeding, so it is effectively a synchronous call.
    3. If you are calling a local proxy service, OSB will wait until the processing in the local proxy service completes and it is effectively a synchronous call.
  2. Can be invoked from the request and/or response pipelines.
  3. Best to use when you do not need to wait for a response from the process you are calling (fire and forget, i.e. asynchronous calls).
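Outside OSB, the blocking-versus-fire-and-forget contrast above can be sketched in plain Python (purely an analogy, not OSB code; `backend_service` and the payloads are made up for the sketch):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def backend_service(payload):
    """Stands in for a back-end business service."""
    time.sleep(0.05)
    return payload.upper()

executor = ThreadPoolExecutor(max_workers=2)

# Route / Service Callout analogy: submit and block on the reply, so the
# flow cannot continue (and exceptions surface here) until the call returns.
response = executor.submit(backend_service, "order-123").result()

# Publish analogy: submit and never read the result -- fire and forget.
# A failure inside backend_service would be invisible to this flow.
executor.submit(backend_service, "audit-event")

executor.shutdown(wait=True)
print(response)  # -> ORDER-123
```

The analogy is only about who waits on whom; OSB's actual thread and transaction management is far richer, as the QoS notes above describe.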

Thursday, April 24, 2014

How to limit number of rows returned from Oracle at the JDBC data source level?

As we all know, a DB poller is used to retrieve different kinds of data from the database.

Consider a scenario where you are polling a table that is rapidly updated by another process. In every polling cycle, the number of records eligible for the DB poller will be huge.

And in cases where the retrieved data is metadata related to some other information, like a BatchId or OrderId, which is used in turn to retrieve more information, the server will not be able to handle such an amount of data; even if it does, there will be a huge performance impact.

How do we restrict this kind of behaviour?

While configuring our DB poller, we have a few performance-related parameters which we can use to restrict this kind of behaviour.


Database rows per XML document - MaxRaiseSize

On read (inbound) you can set MaxRaiseSize = 0 (unbounded), meaning that if you read 1000 rows, you will create one XML document with 1000 elements, which is passed through a single Oracle BPEL Process Manager instance. A merge on the outbound side can then take all 1000 in one group and write them all at once with batch writing.


Database Rows per Transaction - MaxTransactionSize

Assume that there are 10,000 rows at the start of a polling interval and that maxTransactionSize is 100. In standalone mode, a cursor is used to iteratively read and process 100 rows at a time until all 10,000 have been processed, dividing the work into 10,000 / 100 = 100 sequential transactional units. In a distributed environment, a cursor is also used to read and process the first 100 rows. However, the adapter instance will release the cursor, leaving 9,900 unprocessed rows (or 99 transactional units) for the next polling interval or another adapter instance. For load balancing purposes, it is dangerous to set the maxTransactionSize too low in a distributed environment (where it becomes a speed limit). It is best to set the maxTransactionSize close to the per-CPU throughput of the entire business process. This way, load balancing occurs only when you need it.


Throttling - RowsPerPollingInterval

DB throttling is the mechanism to control the number of database records processed by the SOA engine through the DB Adapter in a particular interval. Throttling can also be used to control the number of records sent to the end systems. If throttling is not defined, the end systems may be flooded with messages, which will affect their functioning. Throttling parameters should be configured based on the end systems' capacity to process the incoming messages. As of Oracle Adapters release 11.1.1.6.0, we can set the inbound DBAdapter property RowsPerPollingInterval to control throttling. It acts as a limit on the number of records which can be processed in one polling interval. The default value is unlimited. Patch 12881289 should be applied to enable this for SOA 11.1.1.5.0 and earlier versions.

The maximum rows processed per second is: Number of active nodes in the SOA cluster x NumberOfThreads x RowsPerPollingInterval / PollingInterval.
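Plugging illustrative numbers into the formula above (the function name and sample values are made up for the sketch):

```python
def max_rows_per_second(active_nodes, threads, rows_per_polling_interval,
                        polling_interval_seconds):
    """Upper bound on rows/second for a clustered inbound DB Adapter."""
    return (active_nodes * threads * rows_per_polling_interval
            / polling_interval_seconds)

# e.g. a 2-node cluster, 4 adapter threads per node, at most 500 rows
# per 10-second polling interval (all numbers are illustrative)
print(max_rows_per_second(2, 4, 500, 10))  # -> 400.0
```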


MaxTransactionSize can be thought of as RowsPerDatabaseTransaction or DatabaseFetchSize, that is, how many records will be fetched from the database to the DB Adapter engine in each transaction. It does not affect how many rows can be processed in one polling interval.


For example:

If you want to pick only 10 rows at a time, i.e. only 10 rows per instance, you need to give the value 10 for the property called "Database Rows per XML Document".
The "Database Rows per Transaction" value should be greater than or equal to the "Database Rows per XML Document" value.
Whatever polling frequency you give, when the database adapter looks for records in the table and has, let's say, 100 records to process, and you set Database Rows per XML Document to 10 and Database Rows per Transaction to 20, this is how it behaves at runtime:
first the database adapter gets 20 rows from the database, and 2 instances are created with 10 records each;
then the next 20 records are retrieved, creating 2 more instances with 10 records each;
and this process continues until all records are processed.
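The runtime behaviour described above can be sketched as a tiny simulation (illustrative only; the real adapter works with database cursors, and the function name is made up):

```python
import math

def simulate_poll(total_rows, rows_per_transaction, rows_per_xml_document):
    """Return (fetches, instances) for one backlog of eligible rows."""
    fetches, instances = 0, 0
    remaining = total_rows
    while remaining > 0:
        batch = min(rows_per_transaction, remaining)
        fetches += 1
        # each fetch is split into XML documents of at most
        # rows_per_xml_document rows; each document becomes one instance
        instances += math.ceil(batch / rows_per_xml_document)
        remaining -= batch
    return fetches, instances

# 100 eligible rows, Database Rows per Transaction = 20,
# Database Rows per XML Document = 10
print(simulate_poll(100, 20, 10))  # -> (5, 10)
```

Five fetches of 20 rows, each raising 2 instances of 10 records, matches the walkthrough above.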


In addition to all these properties, there is one more property called MaxRows.

We can use this property to cap the number of rows picked up at the JDBC source level.
E.g. if we have 10 records in a table and the DB poller query matches all 10, it would normally pick all 10 records; but if we set MaxRows to some value 'n', only those n records will be picked and processed.
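The effect of such a row cap can be mimicked with Python's DB-API (sqlite3 and the `orders` table are stand-ins for illustration; the real cap is the MaxRows property on the adapter/data source):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, 'NEW')",
                 [(i,) for i in range(10)])

MAX_ROWS = 4  # stand-in for the MaxRows cap

cur = conn.execute("SELECT id FROM orders WHERE status = 'NEW'")
batch = cur.fetchmany(MAX_ROWS)  # only MAX_ROWS rows reach the poller
print(len(batch))  # -> 4
```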

Hope this helps..

Please write to us in case you need any more clarifications.

Saturday, April 19, 2014

Scheduler process using Bpel DB Adapter

Every now and then, we need a scheduler process to initiate a workflow.

There are different types of schedulers available in the market, and different kinds of implementations.

But there is one simple way to have a scheduler process: all we need to use is the BPEL DB Adapter.


Create a table with two columns: POLL_ENABLE_FLAG and POLL_TIME.
A simple table with just these two columns. Insert just one record, with values (Y, date).

Now create a DB poller to poll this table based on the flag, and use Logical Delete.

As part of the LOGICAL DELETE, update the record in the table to the same value.

Provide values such as the polling frequency and other details. Make sure to set

Database Rows per XML Document to 1.

In case it is a clustered environment, enable Distributed Polling.

So now, for every polling cycle, the record will be picked and the process can initiate the workflow.
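The whole trick can be simulated end to end (sqlite3 as a stand-in; `polling_cycle` and the workflow hook are made up for the sketch — in reality the DB Adapter does the polling and the logical delete):

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scheduler (poll_enable_flag TEXT, poll_time TEXT)")
conn.execute("INSERT INTO scheduler VALUES ('Y', ?)",
             (datetime.now().isoformat(),))

triggered = []

def polling_cycle():
    """One poller cycle: read the flagged row, start the flow, 'logical delete'."""
    row = conn.execute("SELECT poll_time FROM scheduler "
                       "WHERE poll_enable_flag = 'Y'").fetchone()
    if row is not None:
        triggered.append(row[0])  # the workflow would be initiated here
        # the logical delete writes the SAME value back, so the record
        # stays eligible and fires again on the next polling cycle
        conn.execute("UPDATE scheduler SET poll_enable_flag = 'Y'")

for _ in range(3):  # three polling intervals
    polling_cycle()
print(len(triggered))  # -> 3
```

Because the "logical delete" leaves the flag unchanged, the single record re-triggers on every interval, which is exactly what makes it behave like a scheduler.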

Hope this post helps you...Please write to us in case you need more clarifications



Friday, April 18, 2014

Need help ??? Mail us @ soahaters@gmail.com

We take up online trainings on Oracle SOA, BPEL, OSB, Tibco,....

We also take up freelancing projects on these technologies and resolve your day-to-day technical glitches.

All you need to do is mail us at ::: soahaters@gmail.com   and we will contact you immediately and resolve your issues....Try now !!!

Wednesday, April 16, 2014

Installation of Oracle Application Integration Architecture(AIA) PS3 on Windows machines

The following article guides you through a step-by-step AIA installation.

Prerequisites for AIA Installation:
  1. Repository creation to host the SOA schemas (use the 11.1.1.4 version of RCU)
  2. Oracle WebLogic Server 10.3.4 installation
  3. Oracle SOA Suite installed on top of WebLogic Server
  4. Oracle WebLogic domain configured with Oracle SOA Suite 11.1.1.4, aka PS3


Application Integration Architecture AIA PS3 Installation on Windows:

Once the prerequisites are done, before running the installer we need to configure some basic requirements.

1. Enable Remote JDBC Connection in Weblogic Server: 

 --Go to MiddlewareHome-UserProjects-Domains-YourDomain-Bin-SetDomainEnv.cmd
 --Search for the string: WLS_JDBC_REMOTE_ENABLED="-Dweblogic.jdbc.remoteEnabled=false"
 -- Change false to true.
 -- Restart the Admin Server.

2. Increase the Memory Settings:

 -- Go to SOA_HOME-bin-ant-sca-compile.xml
 -- Increase the following memory settings so that you don't run into 'Out of Memory' issues while 
    the installer compiles the AIA composite applications.

3. Set up the TimeZone for the server
 -- Open Weblogic Admin Server Console--Environment--servers-AdminServer-Server Start tab 
 -- In the ARGUMENTS field, enter the property -Duser.timezone=TZone, where TZone can be a timezone abbreviation such as UTC, GMT, or MST, a Continent/City value, or an offset format such as "+5:30"
Note: I have configured the SOA domain with an all-in-one AdminServer. If you have configured a domain containing Admin+SOA servers, then you have to configure this property on the SOA server.

4. Ensure Correct Settings for Node Manager:
 -- Go to MiddlewareHome -- wlserver_10.3--common--nodemanager
 -- Open the nodemanager.properties file and check the StartScriptEnabled property
 -- Set it to true.
 -- Start the Node Manager using StartNodeManager.cmd.

5. Ensure that NodeManager is Reachable:
 -- Open up weblogic console -- Environment--Machines--MachineNameToWhichYouHaveConfiguredNodeManager--NodeManager--Monitoring and make sure that its status is "Reachable"

Once you are done with the above, we are good to go with the installation.


1: Launch the installer
2: We will start with the installation. Click on Run

3: Specify the JDK home. Use the one that is inside the MiddlewareHome folder

4: Click on Next
  
5: Click Next,  once done with the Prerequisites check.

6: Specify the AIAHOME name; it is a logical name for your installation. Also specify the AIAHOME path (create it under the MiddlewareHome) and the AIAInstance name (this represents one AIA deployment; you can have n number of installations).

7: Specify the SOA server details. 

8: Click on Next 
9: Specify the DB details to configure the AIA Schemas

10: Specify the MDS details

11: Optional step: Depends on your requirement.

12: Click on install button to install the AIA PS3






Passing arguments/parameters to an XSLT transformation in BPEL / How to pass a variable value defined in the BPEL and use it in the XSLT file

As depicted in my previous post on how to assign a variable post-XSLT transformation, there may be a few better ways to achieve the same result.

Consider the student info case explained in the earlier post: I have my student_college in bpel.xml as a preference variable.
If we can manage to bring that variable into the transformation directly, I don't need to perform the setText and have another assign activity below it, which is a performance improvement.

This matters because, as the variable structure grows, the assign after the transformation becomes a copy of the whole variable, and it can take a considerable amount of time.

Here in this post, we will explain one better way by which the same can be achieved with slightly better performance.

Every time we perform an XSLT transformation by selecting the input and output source variables, the function used in the background is ora:processXSLT().
The function is more or less similar to assign; the only difference is that it goes through an XSLT engine. If we take a closer look, the generic syntax of this function is:

ora:processXSLT('template', 'input', 'properties'?). The properties argument, the optional last parameter to this function, is not mandatory.
* template is the XSL file
* input is the source node
* the output (target node) is the value returned by the function, which you assign with a copy
* properties are what is defined in bpel.xml
If we want to pass other variables which are not defined in bpel.xml, we need to create an XSD for them and then pass those variables.
The XSLT engine's input and output must be of types defined in an XSD/WSDL; it cannot take a simple string type.

So create an XSD called parameters.xsd, and make it a name/value pair structure as shown below.

<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:p="http://hatesoa.oracle.com/service/sampleSOAbpel"
            xmlns="http://hatesoa.oracle.com/service/bpel/common"
            targetNamespace="http://hatesoa.oracle.com/service/sampleSOAbpel"
            elementFormDefault="qualified">
  <xsd:element name="parameters">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="item" minOccurs="1" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="name" type="xsd:string"/>
              <xsd:element name="value" type="xsd:string"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  </xsd:schema>

Now go ahead and create a variable of this xsd and pass it as a fourth parameter in the ora:processXSLT function.

<assign name="transformation">
            <bpelx:annotation>
              <bpelx:pattern>transformation</bpelx:pattern>
            </bpelx:annotation>
            <copy>
              <from expression="ora:processXSLT('Transform.xsl',bpws:getVariableData('OutputVariable','OutputParameters'), bpws:getVariableData('parameters'))"/>
              <to variable="outputVariable" part="payload"/>
            </copy>
          </assign>

** We need to populate the parameters variable before passing it as a parameter.
** We must declare every variable we want to use in the XSLT with an xsl:param element.
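Outside BPEL, building a payload that conforms to parameters.xsd might look like this (a sketch with stdlib ElementTree; the namespace is taken from the XSD above, and the student_college pair is illustrative):

```python
import xml.etree.ElementTree as ET

# namespace taken from the parameters.xsd above
NS = "http://hatesoa.oracle.com/service/sampleSOAbpel"

def build_parameters(pairs):
    """Build a <parameters> payload of name/value items per parameters.xsd."""
    root = ET.Element(f"{{{NS}}}parameters")
    for name, value in pairs.items():
        item = ET.SubElement(root, f"{{{NS}}}item")
        ET.SubElement(item, f"{{{NS}}}name").text = name
        ET.SubElement(item, f"{{{NS}}}value").text = value
    return root

payload = ET.tostring(build_parameters({"student_college": "XYZ College"}),
                      encoding="unicode")
print(payload)
```

In BPEL you would of course populate the variable with assign/copy operations rather than Python; the sketch only shows the name/value shape the XSLT expects.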


Tuesday, April 15, 2014

soa-infra-Versions in SOA 11.1.1.7 - PS 6

Unlike the previous versions, SOA 11.1.1.7 (PS6) provides a way to check the versions of the components installed as part of soa-infra.

File Name : soa-infra-Versions.xsl

Location : $OH\user_projects\domains\base_domain\servers\soa_server1\logs

Friday, April 11, 2014

How to assign a variable post xslt transformation

In many working scenarios, we come across a situation where a particular variable needs to be populated from two different variables.

In such scenarios, if we are using the BPEL transformation activity, we need to make sure some dummy value is assigned to the variable in the .xsl file before the next assignment.

Consider a small scenario:

I need to populate the variable updateStudent, which comprises:

{
Student_No
Student_Name
Student_Age
Student_Mobile
Student_Address
Student_College
}

Of these, I have all the values in my process input variable student_info, except Student_College.
I have the value of Student_College defined as a descriptor property in bpel.xml in 10g, or in composite.xml in 11g.

You can have a one-to-one mapping between student_info and updateStudent using an .xsl file.

You then assign the Student_College value in another assign activity after the transformation.
But before that, we need to make sure we assign some dummy value to Student_College in the transformation using setText.

If we don't, the XSLT engine will remove the element from the variable structure, and the next assign activity will fail.
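The failure mode can be reproduced outside BPEL with plain ElementTree (a simulation of the idea, not the BPEL XSLT engine itself; element names follow the student example above):

```python
import xml.etree.ElementTree as ET

# output of a transform that skipped Student_College entirely
without_placeholder = ET.fromstring(
    "<updateStudent><Student_No>1</Student_No></updateStudent>")
target = without_placeholder.find("Student_College")
print(target is None)  # -> True: a later 'assign' has no node to copy into

# output of a transform that wrote a dummy value via setText
with_placeholder = ET.fromstring(
    "<updateStudent><Student_No>1</Student_No>"
    "<Student_College>dummy</Student_College></updateStudent>")
with_placeholder.find("Student_College").text = "XYZ College"
print(with_placeholder.find("Student_College").text)  # -> XYZ College
```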

Dehydration Store Tables used in Oracle BPEL PM

cube_instance - stores instance metadata, eg. instance creation date, current
state, title, process identifier

cube_scope - stores the scope data for an instance ... all the variables
declared in the bpel flow are stored here, as well as some internal objects to
help route logic throughout the flow.

work_item - stores activities created by an instance ... all BPEL activities in
a flow will have a work_item created for it. This work item row contains meta
data for the activity ... current state, label, expiration date (used by wait
activities) ... when the engine needs to be restarted and instances recovered,
pending flows are resumed by inspecting their unfinished work items.

document - stores large XML variables. If a variable gets to be larger than a
specific size (configurable via the largeDocumentThreshold property via the
domain configuration page) then the variable is stored in this table to
alleviate loading/saving time from the cube_scope table.


audit_trail - stores the audit trail for instances. The audit trail viewed from the console is modelled from an XML document. As the instance is worked on, each activity writes out events to the audit trail as XML, which is compressed and stored in a raw column. Querying the audit trail via the API/console will join the raw columns together and uncompress the contents into a single XML document.

audit_details - audit details can be logged via the API ... by default, activities such as assign log the variables as audit details (this behavior can be set via the auditLevel property on the domain configuration page). Details are separated from the audit trail because they tend to be very large in size ... if the user wishes to view a detail, they click a link from the audit trail page and the detail is loaded separately. There is a threshold value for details too ... if the size of a detail is larger than a specific value (see auditDetailThreshold) then it is placed in this table, otherwise it is merged into the audit trail row.

dlv_message - callback messages are stored here. All non-invocation messages
are saved here upon receipt. The delivery layer will then attempt to correlate
the message with the receiving instance. This table only stores the metadata
for a message. (eg. current state, process identifier, receive date).

dlv_message_bin - stores the payload of a callback message. The metadata of a
callback message is kept in the dlv_message table, this table only stores the
payload as a blob. This separation allows the metadata to change frequently
without being impacted by the size of the payload (which is stored here and
never modified).

dlv_subscription - stores delivery subscriptions for an instance. Whenever an instance expects a message from a partner (eg. receive, onMessage) a subscription is written out for that specific receive activity. Once a delivery message is received, the delivery layer attempts to correlate the message with the intended subscription.

invoke_message - stores invocation messages, messages which will result in the
creation of an instance. This table only stores the metadata for an invocation
message (eg. current state, process identifier, receive date).

invoke_message_bin - stores the payload of an invocation message. Serves the
same purpose the dlv_message_bin table does for dlv_message.

task - stores tasks created for an instance. The TaskManager process keeps its
current state in this table. Upon invoking the TaskManager process, a task
object is created, with a title, assignee, status, expiration date, etc...
When updates are made to the TaskManager instance via the console, the
underlying task object in the db is changed.

schema_md - (just added via patch delivered to Veerle) contains metadata about columns defined in the orabpel schema. The use case driving this feature was: how to change the size of a custom_key column for a cube_instance row? Changing the db schema was simple, but the engine code assumed a certain length and truncated values to match that length to avoid a db error being thrown. Now, column lengths are defined in this table instead of being specified in the code. To change a column length, change the column definition in the table, then change the value specified in this table, then restart the server.


Column-by-column description:

table ci_id_range

- next_range (integer) - instance ids in the system are allocated on a block
basis ... once all the ids from a block have been allocated, another block is
fetched, next_range specifies the start of the next block.


table cube_instance

- cikey (integer) - primary key ... foreign key for other tables
- domain_ref (smallint) - domain identifier is encoded as a integer to save
space, can be resolved by joining with domain.domain_ref.
- process_id (varchar) - process id
- revision_tag (varchar) - revision tag
- creation_date (date)
- creator (varchar) - user who created instance ... currently not used
- modify_date (date) - date instance was last modified
- modifier (varchar) - user who last modified instance ... currently not used
- state (integer) - current state of instance, see com.oracle.bpel.client.IInstanceConstants for values
- priority (integer) - current instance priority (user specified, has no impact
on engine)
- title (varchar) - current instance title (user specified, no engine impact)
- status (varchar) - current status (user specified)
- stage (varchar) - current stage (user specified)
- conversation_id (varchar) - extra identifier associated with instance, eg. if
passed in via WS-Addressing or user specified custom key.
- root_id (varchar) - the conversation id of the instance at the top of the
invocation tree. Suppose A -> B -> C: root( B ) = A, root( C ) = A, parent( B )
= A, parent( C ) = B. The instance at the top of the tree will not have this
set.
- parent_id (varchar) - the conversation id of the parent instance that created
this instance, instance at the top of the tree will not have this set.
- scope_revision (integer) - internal checksum of scope bytes ... used to keep
caches in sync
- scope_csize (integer) - compressed size of instance scope in bytes
- scope_usize (integer) - uncompressed size of instance scope in bytes
- process_guid (varchar) - unique identifier for the process this instance belongs to ... if changes need to be made for all instances of a process, this column is used to query (eg. stale process).
- process_type (integer) - internal
- metadata (varchar) - user specified
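As a hedged illustration of how these columns are typically queried (sqlite3 stand-in with a minimal subset of the columns; the state values and rows are invented — real state codes live in IInstanceConstants):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# minimal stand-in for cube_instance, using columns described above
conn.execute("""CREATE TABLE cube_instance (
    cikey INTEGER PRIMARY KEY, process_id TEXT,
    revision_tag TEXT, state INTEGER, title TEXT)""")
conn.executemany(
    "INSERT INTO cube_instance VALUES (?, ?, ?, ?, ?)",
    [(1, "OrderFlow", "1.0", 1, "open instance"),
     (2, "OrderFlow", "1.0", 5, "finished instance"),
     (3, "StudentFlow", "1.0", 1, "open instance")])

OPEN_STATE = 1  # illustrative; real values come from IInstanceConstants
open_instances = conn.execute(
    "SELECT cikey, process_id FROM cube_instance WHERE state = ?",
    (OPEN_STATE,)).fetchall()
print(open_instances)  # -> [(1, 'OrderFlow'), (3, 'StudentFlow')]
```

In practice you would query the real dehydration store through the BPEL client API rather than raw SQL, but the cikey/state filtering pattern is the same.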


table cube_scope

- cikey (integer) - foreign key
- domain_ref (integer) - domain identifier
- modify_date (date) - date scope last modified
- scope_bin (blob) - scope bytes


table work_item

- cikey (integer) - foreign key
- node_id (varchar) - part of work item composite key, identifier for the bpel
activity that this work item was created for
- scope_id (varchar) - part of work item composite key, identifier for the
internal scope that this work item was created for (note this is not the scope
declared in bpel; the engine has an internal scope tree that it creates for
each instance, bpel scopes will map to an internal scope but there will be
other internal scopes that have no mapping to the bpel definition).
- count_id (integer) - part of work item composite key, used to distinguish
between work items created from same activity in the same scope.
- domain_ref (integer) - domain identifier
- creation_date (date)
- creator (varchar) - user who created work item ... currently not used
- modify_date (date) - date work item was last modified
- modifier (varchar) - user who last modified work item ... currently not used
- state (integer) - current state of work item, see com.oracle.bpel.client.IActivityConstants for values
- transition (integer) - internal use, used by engine for routing logic
- exception (integer) - no longer used
- exp_date (date) - expiration date for this work item; wait, onAlarm
activities are implemented as expiration timers.
- exp_flag (integer) - set if a work item has been called back by the
expiration agent (ie. expired).
- priority (integer) - priority of work item, user specified, no engine impact
- label (varchar) - current label (user specified, no engine impact)
- custom_id (varchar) - custom identifier (user specified, no engine impact)
- comments (varchar) - comment field (user specified, no engine impact)
- reference_id (varchar) -
- idempotent_flag (integer) - internal use
- process_guid (varchar) - unique identifier for the process this work item
belongs to ... if changes need to be made for all instances of a process, this
column is used to query (eg. stale process).


table document

- dockey (varchar) - primary key for document
- cikey (integer) - foreign key
- domain_ref (integer) - domain identifier
- classname (varchar) - no longer used
- bin_csize (integer) - compressed size of document in bytes
- bin_usize (integer) - uncompressed size of document in bytes
- bin (blob) - document bytes
- modify_date (date) - date document was last modified


table audit_trail

- cikey (integer) - foreign key
- domain_ref - domain identifier
- count_id (integer) - many audit trail entries may be made for each instance,
this column is incremented for each entry per instance.
- block (integer) - when the instance is dehydrated, the batched audit trail
entries up to that point are written out ... this block ties together all rows
written out at one time.
- block_csize (integer) - compressed size of block in bytes
- block_usize (integer) - uncompressed size of block in bytes
- log (raw) - block bytes


table audit_details

- cikey (integer) - foreign key
- domain_ref (integer) - domain identifier
- detail_id (integer) - part of composite key, means of identifying particular
detail from the audit trail
- bin_csize (integer) - compressed size of detail in bytes
- bin_usize (integer) - uncompressed size of detail in bytes
- bin (blob) - detail bytes


table dlv_message

- conv_id (varchar) - conversation id (correlation id) for the message...this
value is used to correlate the message to the subscription.
- conv_type (integer) - internal use
- message_guid (varchar) - unique identifier for the message...each message
received by the engine is tagged with a message guid.
- domain_ref (integer) - domain identifier
- process_id (varchar) - identifier for process to deliver the message to
- revision_tag (varchar) - identifier for process revision
- operation_name (varchar) - operation name for callback port.
- receive_date (date) - date message was received by engine
- state (integer) - current state of message ... see com.oracle.bpel.client.IDeliveryConstants for values
- res_process_guid (varchar) - after the matching subscription is found, the
process guid for the subscription is written out here.
- res_subscriber (varchar) - identifier for matching subscription once found.


table dlv_message_bin

- message_guid (varchar) - unique identifier for message
- domain_ref (integer) - domain identifier
- bin_csize (integer) - compressed size of delivery message payload in bytes
- bin_usize (integer) - uncompressed size of delivery message payload in bytes
- bin (blob) - delivery message payload


table dlv_subscription

- conv_id (varchar) - conversation id for subscription, used to help correlate
received delivery messages.
- conv_type (integer) - internal use
- cikey (integer) - foreign key
- domain_ref (integer) - domain identifier
- process_id (varchar) - process identifier for instance
- revision_tag (varchar) - revision tag for process
- process_guid (varchar) - guid for process this subscription belongs to
- operation_name (varchar) - operation name for subscription (receive,
onMessage operation name).
- subscriber_id (varchar) - the work item composite key that this subscription
is positioned at (ie. the key for the receive, onMessage work item).
- service_name (varchar) - internal use
- subscription_date (date) - date subscription was created
- state (integer) - current state of subscription ... see com.oracle.bpel.client.IDeliveryConstants for values
- properties (varchar) - additional property settings for subscription


table invoke_message

- conv_id (varchar) - conversation id for message, passed into system so
callbacks can correlate properly.
- message_guid (varchar) - unique identifier for message, generated when
invocation message is received by engine.
- domain_ref (integer) - domain identifier
- process_id (varchar) - identifier for process to deliver the message to
- revision_tag (varchar) - revision tag for process
- operation_name (varchar) - operation name for receive activity
- receive_date (date) - date invocation message was received by engine
- state - current state of invocation message, see com.oracle.bpel.client.IDeliveryConstants for values
- priority (integer) - priority for invocation message, this value will be used
by the engine dispatching layer to rank messages according to importance ...
lower values mean higher priority ... messages with higher priority are
dispatched to threads faster than messages with lower values.
- properties (varchar) - additional property settings for message


table invoke_message_bin

- message_guid (varchar) - unique identifier for message
- domain_ref (integer) - domain identifier
- bin_csize (integer) - compressed size of invocation message payload in bytes
- bin_usize (integer) - uncompressed size of invocation message payload in bytes
- bin (blob) - invocation message bytes


table task

- domain_ref (integer) - domain identifier
- conversation_id (varchar) - conversation id for task instance ... allows task
instance to callback to client
- title (varchar) - current title for task, user specified
- creation_date (date) - date task was created
- creator (varchar) - user who created task
- modify_date (date) - date task was last modified
- modifier (varchar) - user who last modified task
- assignee (varchar) - current assignee of task, user specified, no engine
impact
- status (varchar) - current status, user specified, no engine impact
- expired (integer) - flag is set if task has expired
- exp_date (date) - expiration date for task, expiration actually takes place
on the work item in the TaskManager instance; upon expiration the task row is updated
- priority (integer) - current task priority, user specified, no engine impact
- template (varchar) - not used
- custom_key (varchar) - user specified custom key
- conclusion (varchar) - user specified conclusion, no engine impact