Sunday, January 21, 2018

Scanning medical supplies

Patient safety and logistics in a hospital are closely related. Is the correct pacemaker available at the right time for the operation, and which heart valve was used for a specific patient?
To be able to quickly see which medical supplies were used for a specific patient, or to handle a recall for a given lot, accurate registration is essential.
To support this, insight is needed into the logistical processes, inventory levels and usage within the hospital. Profource offers an integrated solution within the Oracle e-Business Suite for healthcare, focused on patient safety, traceability, barcode scanning and logistical efficiency.

Below is a simplified version of the process we implement at hospitals. Items are received from the supplier in a central repository. There the boxes are unpacked and the items are transported to the departments that require the goods. Goods are then placed in specific subinventories (sterile or non-sterile) used for the operations.
Usually the first receipt at the central repository is done without barcode scanning (we do not register the specific lots and/or serial numbers received).
The receipt at the department, however, is supported by barcode scanning.

Once the items are registered, they can be used in operations. Some larger hospitals prepare trays for operations and make a subinventory transfer from the OR subinventory to the tray subinventory for full traceability.
We also support this transaction using barcode scanning.
When items are used during the operation, we can register the issue directly in the eBusiness Suite, again using barcode scanning. Supplies that have medical consequences ("in patient") are interfaced to an EPD (electronic patient record).
We also support the registration of other goods, like test items, sets that are on loan from a supplier, etc. These may or may not have financial impact and may or may not be sent to the EPD, depending on the usage.

Barcode scanning

Profource has built a configurable decoder to recognize GS1 and HIBC compliant barcodes. The GS1 barcode for example consists of
  • a prefix (usually 01)
  • the product identifier (14 positions)
  • a lot number starting with 10
  • an expiration date starting with 17
  • a serial number starting with 21
Barcodes can be split across multiple lines as well. In Oracle we register whether items are under lot, expiration date (THT) and/or serial control, so we know what to expect in the barcode. The system automatically expects a second line if the first line did not contain the serial number and the item is under serial control.
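
To illustrate, below is a minimal sketch of such a decoder in PL/SQL (simplified and hypothetical; the actual decoder is configurable and also handles HIBC and multi-line barcodes). Fixed-length fields follow their prefix directly; variable-length fields (lot 10, serial 21) end at the GS separator (ASCII 29) or at the end of the code.

declare
  -- sample code: (01) product identifier, (17) expiry YYMMDD, (10) lot
  l_code   varchar2(200) := '0108714729779391' || '17200131' || '10LOT123';
  l_pos    pls_integer   := 1;
  l_ai     varchar2(2);
  l_gs     pls_integer;
  l_gtin   varchar2(14);
  l_expiry varchar2(6);
  l_lot    varchar2(20);
  l_serial varchar2(20);
begin
  while l_pos < length(l_code) loop
    l_ai  := substr(l_code, l_pos, 2);
    l_pos := l_pos + 2;
    if l_ai = '01' then                -- product identifier, 14 positions
      l_gtin := substr(l_code, l_pos, 14);
      l_pos  := l_pos + 14;
    elsif l_ai = '17' then             -- expiration date, 6 positions (YYMMDD)
      l_expiry := substr(l_code, l_pos, 6);
      l_pos    := l_pos + 6;
    elsif l_ai in ('10', '21') then    -- lot/serial: variable length
      l_gs := instr(l_code, chr(29), l_pos);
      if l_gs = 0 then
        l_gs := length(l_code) + 1;
      end if;
      if l_ai = '10' then
        l_lot := substr(l_code, l_pos, l_gs - l_pos);
      else
        l_serial := substr(l_code, l_pos, l_gs - l_pos);
      end if;
      l_pos := l_gs + 1;
    else
      exit;                            -- unknown AI: stop (the real decoder raises an error)
    end if;
  end loop;
  dbms_output.put_line('GTIN ' || l_gtin || ', lot ' || l_lot || ', expires ' || l_expiry);
end;
/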

The system is very flexible, so it can also recognize non-GS1 compliant barcodes, as long as they comply with certain rules.

Next to recognizing GS1 barcodes, we also print GS1 compliant barcode labels for items that cannot be scanned due to invalid barcodes (not all suppliers are GS1 or HIBC compliant yet).

Receipt

In our solution we can configure that barcode labels are printed on the first receipt if necessary (for items that always have non-compliant barcodes). The items with their labels are then delivered to the department inventory for the second receipt.
We make sure that on receipt the label on the item is always correct, so we do not run into any issues during the issue at the OR itself.

In this custom screen we (1) scan the purchase order number (and, if necessary, the release) and (2) scan the barcode label on the item, after which (3) the screen shows the open receipts we are expecting from the first receipt.

The system decodes the barcode and retrieves the item in Oracle using the PID, which we have linked to the item.
Using the receipt open interface we register the receipt immediately in the correct subinventory and location.
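
As a sketch, registering such a receipt through the Receiving Open Interface could look like the insert below (simplified and hypothetical; the exact columns, the matching row in rcv_headers_interface and the companion lot/serial interface tables depend on your release and setup).

-- The Receiving Transaction Processor picks up 'PENDING' rows per group
insert into rcv_transactions_interface
  ( interface_transaction_id, group_id
  , transaction_type, transaction_date
  , processing_status_code, processing_mode_code, transaction_status_code
  , source_document_code, po_header_id, po_line_id
  , item_id, quantity, unit_of_measure
  , to_organization_id, subinventory
  , auto_transact_code
  , last_update_date, last_updated_by, creation_date, created_by )
values
  ( rcv_transactions_interface_s.nextval, l_group_id
  , 'RECEIVE', sysdate
  , 'PENDING', 'BATCH', 'PENDING'
  , 'PO', l_po_header_id, l_po_line_id
  , l_item_id, l_quantity, l_uom
  , l_org_id, l_subinventory
  , 'DELIVER'                          -- receive and deliver in one step
  , sysdate, l_user_id, sysdate, l_user_id );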

A big advantage here is of course that the logistics employee no longer has to enter the specific lot or serial number. By scanning the item, the registration is instant, without any errors or typos.
Another advantage is that the system immediately warns you if an item you place in inventory may expire within a certain period. We register at item level how long it should remain usable (say 3 months), so any item with an expiration date within that window triggers a warning.
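
The check itself can be as simple as the fragment below (a sketch; l_min_shelf_months is an assumed item attribute holding the minimum shelf life in months).

-- warn when the scanned expiration date falls inside the shelf-life window
if l_expiration_date < add_months(sysdate, l_min_shelf_months) then
  x_warning := 'Item expires on ' || to_char(l_expiration_date, 'DD-MM-YYYY');
end if;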

Issue

Item issues can be performed directly in eBusiness Suite using a customized screen, or imported when scanned in an external system. An issue is always linked to a patient number or operation ID and to a specialty (configurable in the system).

Issues are processed directly, so we have both the medical registration and the logistics transaction. You can indicate whether the item has actually been used in the patient or not (for example if you dropped the item).

The system supports multiple types of issue. The material transaction is the most common, but you can also register an item from a loan set.
A loan set is a set from a supplier for which you only pay once it is used (including a rental fee once you start using the set). The solution makes sure the rent and the usage are registered so they can be invoiced by the supplier.

One of the advantages of the fully integrated solution is that during scanning the system immediately warns you if the item is not available or expired.

Label printing

The printing solution for the labels is very flexible. Using lookup codes we can configure the label for each printer type (Intermec, Toshiba, Zebra).
Lookup codes for each element on the label are defined and can be configured per printer. For example, the start-of-label sequence for a specific Intermec printer would be

<STX>B8;o300,170;f2;c17;h600;w8;i0;d0,60;<ETX>\n\
<STX>R;<ETX>\n\
<STX><ESC>E2<ETX>\n\
<STX><CAN><ETX>

Whereas for the Zebra it would be

^FD

And the Toshiba uses something like

{PC01;0000,0100,07,07,H,00,B=[]{RC01;

Usually we use 2D barcodes rather than 1D codes, to make sure there is enough space to print both the barcode and the required information (like item description, supplier item number, location, etc.).

Sunday, November 12, 2017

On premise to Cloud integration with Oracle Enterprise Contracts

During several conversion projects towards Oracle Enterprise Contracts we have used the method of integrating through the database using UTL_HTTP.
The same method can be used for a full integration between, for example, Oracle eBusiness Suite on premise and Oracle Contracts Cloud. This article describes the contracts integration we've set up for several of our customers between Oracle eBusiness Suite on premise and Oracle Cloud Contracts. Note that you could also use any other (Oracle) database to implement the same, but in most of our cases we've used eBS. I hope this article will help you get started on your own integration with Cloud Contracts as well!

The method consists of a number of webservices we call in a specific order to upload the contract, activate the contract and upload documents (attachments), like the signed contract.

To find information on the webservice itself, check the Oracle Enterprise Repository (OER):
http://www.oracle.com/webfolder/technetwork/docs/HTML/oer-redirect.html
You should check under the sales section
https://docs.oracle.com/cloud/farel12/salescs_gs/OESWS/toc.htm
for the Contracts Service
https://docs.oracle.com/cloud/farel12/salescs_gs/OESWS/Contract_Service_ContractService_svc_9.htm#oracle.apps.contracts.coreAuthoring.transaction.transactionService.ContractService

The URL for your WSDL is
https://(CRMDomain,Contract Management)/external-contractmanagement-contractsCoreTransaction/ContractService?WSDL

We encode some generic parameters in our service:

  g_fs_user           varchar2(200)     default '...';
  g_fs_pswd           varchar2(200)     default '...';
  g_activity_ws       varchar2(200)     default 'https://....oraclecloud.com:443/appCmmnCompActivitiesActivityManagement/ActivityService';
  g_contract_ws_r10   varchar2(200)     default 'https://....oraclecloud.com/external-contractmanagement-contractsCoreTransaction/ContractService';

Usually we use lookups in eBS for this purpose. The lookup code in that case is the name of the environment; the description is the URL of the service. This way the values will still be correct after a clone.
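
A sketch of resolving the endpoint that way (the lookup type name is an assumption):

-- the lookup code is the environment name, the description holds the URL,
-- so the value is still correct after a clone
select flv.description
into   g_contract_ws
from   fnd_lookup_values flv
where  flv.lookup_type = 'XX_CLOUD_ENDPOINTS'
and    flv.language    = 'US'
and    flv.lookup_code = (select upper(instance_name) from v$instance);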

Generic method

In general, what we did is:
a) Create a global variable that contains the payload for the webservice with replaceable tags
b) Loop through the transactions you like to process
c) Transform, validate the data into what is required
d) Replace the tag in the payload with your value
e) Call the webservice

So for example the contract creation itself is a variable like

  g_contract_header_start varchar2(4000) default '' ||
    '     <typ:contractHeader>
            <tran:OrgId>[OrgId]</tran:OrgId>
            <tran:ContractTypeId>[ContractTypeId]</tran:ContractTypeId>
            <tran:ContractNumber>[ContractNumber]</tran:ContractNumber>
            <tran:StartDate>[StartDate]</tran:StartDate>
            <tran:EndDate>[EndDate]</tran:EndDate>
            <tran:BuyOrSell>[BuyOrSell]</tran:BuyOrSell>
            <tran:CurrencyCode>[CurrencyCode]</tran:CurrencyCode>
            <tran:Cognomen>[Cognomen]</tran:Cognomen>
            <tran:Description>[Description]</tran:Description>
            <tran:LegalEntityId>[LegalEntityId]</tran:LegalEntityId>
            <tran:StsCode>DRAFT</tran:StsCode>
            <tran:WebServiceFlag>true</tran:WebServiceFlag>
            <tran:AgreementEnabledFlag>true</tran:AgreementEnabledFlag>
            <tran:EstimatedAmount currencyCode="[CurrencyCode]">[AgreedAmount]</tran:EstimatedAmount>
            <tran:VersionDescription>[VersionDescription]</tran:VersionDescription>
            '; 

During creation we loop through the contracts we need to create and fetch the necessary values. Now you may notice we need the contract type id for example. This is a value that exists in the cloud environment and not in the eBS environment. So how do we get that?

Get Translation Data

In order to get "translation data" like that, we create a data model in the BI environment of the cloud that provides us with all the necessary internal values and setup.
It queries the legal entities that have been set up, the contract types, etc. We download this into an XML file and upload it into a table to use for the conversion.

To upload the file, we place the XML file on the server and load it into a conversion table using SQL*Loader:

LOAD DATA
INFILE 'content.dat'
  INTO TABLE xxconv_test
  FIELDS TERMINATED BY '#'
  (
    fname   filler char(80),
    c       LOBFILE(fname CHARACTERSET UTF8) TERMINATED BY EOF
  )

Then we convert the CLOB into XML and put it in our translation table. We've used the same method for several clients, hence the client column to track for which client each transformation was used.


DECLARE
  l_xml  XMLTYPE;
  l_clob CLOB;
BEGIN
  DELETE FROM XXCONV_OKC_XML_ALL WHERE client = 'CLIENT';
  COMMIT;

  SELECT c INTO l_clob FROM XXCONV_TEST;
  l_xml := XMLTYPE.createxml(l_clob);

  INSERT INTO XXCONV_OKC_XML_ALL (ids, client) VALUES (l_xml, 'CLIENT');
  COMMIT;
END;
/

Now that we have all our data in a table, we can use queries like the one below. This one shows all valuesets we have used for the flexfields, since we also want to validate the flexfield values before uploading.

select flex_value
, description
    from 
    (
      SELECT Flex_Value_Set_Name, Flex_Value,Description
      FROM xxconv_okc_xml_all t
         , XMLTable('/DATA_DS/DFF_VALUESETS'
             passing t.IDS
             columns 
               FLEX_VALUE_SET_NAME       varchar2(240)     path 'FLEX_VALUE_SET_NAME'
             , FLEX_VALUE       varchar2(240)  path 'FLEX_VALUE'
             , DESCRIPTION    varchar2(240)  path 'DESCRIPTION'
           )
           WHERE t.client = G_CONV_CLIENT
    ) pt
    where pt.flex_value_set_name = 'Your_Valueset'
    ;

Step 1: Creating the contract

Now that all the pieces are in place, we can start creating contracts. We loop through our transactions and for each one we transform and validate the data. The main idea is shown below: you get the internal values and replace the tags in the generic header.

      l_bu := get_bu(...);
      l_contract_header := g_contract_header_start;
      l_contract_header := replace(l_contract_header, '[OrgId]',  l_bu);
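
Dates have to be rendered in the format the service expects (xsd:date), for example (assuming a cursor record r_cur):

      l_contract_header := replace(l_contract_header, '[StartDate]', to_char(r_cur.start_date, 'YYYY-MM-DD'));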


To get the business unit we use a query on our transformation data, for example

select bu.ORGANIZATION_ID
    , bu.DEFAULT_LEGAL_CONTEXT_ID
    into l_bu_id
    , x_le_id
    from 
    (
      SELECT ORGANIZATION_ID
      ,      NAME
      ,      DEFAULT_LEGAL_CONTEXT_ID
      FROM xxconv_okc_xml_all t
         , XMLTable('/DATA_DS/BU'
             passing t.IDS
             columns 
               ORGANIZATION_ID    number(18)     path 'ORGANIZATION_ID'
             , NAME               varchar2(240)  path 'NAME'
             , DEFAULT_LEGAL_CONTEXT_ID number(18) path 'DEFAULT_LEGAL_CONTEXT_ID'
           )
    where t.client = G_CONV_CLIENT
    ) bu
    where upper(bu.name) like upper(l_bu_name || '%');

Depending on whether this is a BUY or SELL contract (which can be found in the setup of the contract type that we downloaded in our preliminary step), we also add suppliers or customers and their contacts. Even other sub-parties can be added.

We also add the roles on the contract, like contract manager, owner, buyer, etc., with their access level. (Note that during our conversion the access role READ did not seem to work.)
A contract party could be something like

  g_contract_party_sell varchar2(2000) default ''||          
' <tran:ContractPartyContact> 
                   <tran:ContactRoleCode>CONTRACT_ADMIN</tran:ContactRoleCode> 
                   <tran:ContactId>[ContactId]</tran:ContactId> 
                   <tran:OwnerFlag>[OWNERFLAG]</tran:OwnerFlag> 
                   <tran:AccessLevel>[ACCESSLEVEL]</tran:AccessLevel> 
                 </tran:ContractPartyContact> ';

with its own tags to be replaced. Note that we also downloaded the suppliers first, before migrating the data.

Flexfields have a complexity of their own, especially context dependent flexfields. Depending on the context (usually determined by the contract type), we add a generic flexfield structure and replace the tags. But we always validate the values in the flexfields before uploading them.

For example, a flexfield with the name of the legal rep:

l_context_iden := validate_dff(p_dff => 'SG_LEGALREP', p_value => r_cur.legal_rep, p_dff_desc => 'Legal Rep');
l_contract_header_dff := replace(l_contract_header_dff, '[LegalRep]', l_context_iden);
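
The validate_dff function itself is not shown above, but a sketch of it could look like this, reusing the valueset query on the transformation table (and the g_conv_client global) shown earlier:

function validate_dff
  ( p_dff      in varchar2
  , p_value    in varchar2
  , p_dff_desc in varchar2
  ) return varchar2
is
  l_flex_value varchar2(240);
begin
  select x.flex_value
  into   l_flex_value
  from   xxconv_okc_xml_all t
       , xmltable('/DATA_DS/DFF_VALUESETS'
           passing t.ids
           columns
             flex_value_set_name varchar2(240) path 'FLEX_VALUE_SET_NAME'
           , flex_value          varchar2(240) path 'FLEX_VALUE'
         ) x
  where  t.client              = g_conv_client
  and    x.flex_value_set_name = p_dff
  and    upper(x.flex_value)   = upper(p_value);
  return l_flex_value;
exception
  when no_data_found then
    raise_application_error(-20001, p_dff_desc || ': value "' || p_value || '" not found in valueset ' || p_dff);
end validate_dff;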

Once we've built up our payload, we call the actual webservice.

We set the namespace mapping (used later when extracting values from the response)

    l_ns_map := l_ns_map ||'xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" ';
    l_ns_map := l_ns_map ||'xmlns:wsa="http://www.w3.org/2005/08/addressing" ';
    l_ns_map := l_ns_map ||'xmlns:ns0="http://xmlns.oracle.com/apps/contracts/coreAuthoring/contractService/types/" ';    
    l_ns_map := l_ns_map ||'xmlns:ns2="http://xmlns.oracle.com/apps/contracts/coreAuthoring/contractService/" ';
    l_ns_map := l_ns_map ||'xmlns:ns1="http://xmlns.oracle.com/apps/contracts/coreAuthoring/contractService/types/" ';


Set the wallet

UTL_HTTP.set_wallet(g_ora_wallet, g_ora_wallet_pwsd);

I've described wallets before in
http://pamkoertshuis.blogspot.nl/search/label/Wallet

Then the header and authentication

l_http_request := UTL_HTTP.begin_request(g_contract_ws, 'POST','HTTP/1.1');
UTL_HTTP.SET_AUTHENTICATION(l_http_request, g_fs_user, g_fs_pswd);

We configure the header

    UTL_HTTP.set_header(l_http_request, 'Content-Type', 'text/xml;charset="UTF-8"');
    UTL_HTTP.set_header(l_http_request, 'Content-Length', LENGTH(p_req));
    UTL_HTTP.set_header(l_http_request, 'Transfer-Encoding', 'chunked');
    UTL_HTTP.set_header(l_http_request, 'SOAPAction', 'http://xmlns.oracle.com/apps/contracts/coreAuthoring/contractService/createContract');
 
Then we write the data in chunks

UTL_HTTP.write_text(l_http_request, l_chunkData);
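
The chunked write itself can be sketched as follows (l_payload being the CLOB payload we built up):

declare
  l_offset pls_integer := 1;
begin
  -- UTL_HTTP.write_text takes a VARCHAR2, so we stream the CLOB in pieces
  while l_offset <= dbms_lob.getlength(l_payload) loop
    utl_http.write_text(l_http_request, dbms_lob.substr(l_payload, 8192, l_offset));
    l_offset := l_offset + 8192;
  end loop;
end;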

And perform the call

 l_http_response := UTL_HTTP.get_response(l_http_request);

To read back the response we also use a temporary LOB. The read loop ends when UTL_HTTP raises the end_of_body exception:

dbms_lob.createtemporary(x_clob, FALSE);
dbms_lob.open(x_clob, dbms_lob.lob_readwrite);

l_info := 'read text';
begin
  loop
    utl_http.read_text(l_http_response, l_buffer);
    dbms_lob.writeappend(x_clob, length(l_buffer), l_buffer);
  end loop;
exception
  when utl_http.end_of_body then
    null;  -- the whole response has been read
end;

End the response

UTL_HTTP.end_response(l_http_response);

On errors we can extract the fault string:

l_resp_xml := XMLType.createXML(x_clob);
  
      SELECT  extractValue(l_resp_xml, '/env:Envelope/env:Body/env:Fault/faultstring', l_ns_map)
      INTO    l_fault_string 
      FROM    dual;

And finally we save the response in our progress table for reporting purposes.


Step 2: Activating the contract

The second step is to activate the contract, if it should be activated at all of course (usually depending on the start and end date). We do the same as when creating the contract, just with a different operation.

 l_ns_map := l_ns_map ||'xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" ';
    l_ns_map := l_ns_map ||'xmlns:wsa="http://www.w3.org/2005/08/addressing" ';
    l_ns_map := l_ns_map ||'xmlns:ns0="http://xmlns.oracle.com/apps/contracts/coreAuthoring/transaction/transactionService/types/" ';
    l_ns_map := l_ns_map ||'xmlns:ns2="http://xmlns.oracle.com/apps/contracts/coreAuthoring/transaction/transactionService/types/" ';
    l_ns_map := l_ns_map ||'xmlns:ns1="http://xmlns.oracle.com/apps/contracts/coreAuthoring/transaction/transactionService/" ';

And set the SOAPAction for this operation:

UTL_HTTP.set_header(l_http_request, 'SOAPAction', 'http://xmlns.oracle.com/apps/contracts/coreAuthoring/contractService/updateContractToActive');
   

Step 3: Uploading documents 

The documents may be more complex, depending on where they reside. If they are on the server, we need to load them into BLOBs before sending. To do that we need a database directory object (visible in DBA_DIRECTORIES) so we can read the files.
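
The directory object itself is a one-time setup, for example (name and path hypothetical):

-- run once as a privileged user; the path is where the documents reside
create or replace directory XX_CONTRACT_DOCS as '/data/conversion/documents';
grant read on directory XX_CONTRACT_DOCS to apps;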

An attachment payload looks something like this

l_req := '
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/crmCommon/activities/activityManagementService/types/" xmlns:obj="http://xmlns.oracle.com/apps/crmCommon/objects/objectsService/">
   <soapenv:Header/>
   <soapenv:Body>
      <typ:createAttachment>
         <typ:attachmentRows>
            <obj:EntityName>OKC_CONTRACT_DOCS</obj:EntityName>
            <obj:Pk1Value>[ContractId]</obj:Pk1Value>
            <obj:Pk2Value>[ECM_BUY]</obj:Pk2Value>
            <obj:Pk3Value>[MajorVersion]</obj:Pk3Value>
            <obj:DatatypeCode>FILE</obj:DatatypeCode>
            <obj:FileName>[Filename]</obj:FileName>
            <obj:Description>[Description]</obj:Description>
            <obj:Title>[Title]</obj:Title>
            <obj:UsageType>S</obj:UsageType>
         <obj:DownloadStatus>N</obj:DownloadStatus>
            <obj:CategoryName>OKC_DOCUMENTS_SUPPORTING_DOC</obj:CategoryName>
            <obj:UploadedFileName>[UploadedFileName]</obj:UploadedFileName>
            <obj:UploadedFile>[UploadedFile]</obj:UploadedFile>
         </typ:attachmentRows>
         <typ:commitData>Y</typ:commitData>
      </typ:createAttachment>
   </soapenv:Body>
</soapenv:Envelope>';

Depending on whether it's a buy or sell contract, we replace the [ECM_BUY] tag with ECM_BUY or ECM_SELL.

We get the file from the file server and escape all XML special characters in the file name, description, etc. Then we open the wallet and make our call again, now with the createAttachment action:

 UTL_HTTP.set_header(l_http_request, 'SOAPAction', 'http://xmlns.oracle.com/apps/crmCommon/activities/activityManagementService/createAttachment');
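
For the escaping itself DBMS_XMLGEN.CONVERT can be used, for example:

-- entity-encode &, < and > before splicing the values into the payload
l_filename    := dbms_xmlgen.convert(l_filename);
l_description := dbms_xmlgen.convert(l_description);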
       
To get the file from the server we use something like the procedure below; each file is encoded into base64 before sending.

procedure get_file
  ( p_dir  IN VARCHAR2
  , p_file IN VARCHAR2
  , p_clob IN OUT NOCOPY CLOB
  , x_skip OUT VARCHAR2
  )
is
  l_bfile BFILE;
  -- 12000 is a multiple of 3, so every base64-encoded chunk carries no
  -- padding and the chunks concatenate into one valid base64 string
  l_step  PLS_INTEGER := 12000;
BEGIN
  l_bfile := BFILENAME(p_dir, p_file);

  DBMS_LOB.fileopen(l_bfile, DBMS_LOB.file_readonly);
  if dbms_lob.getlength(l_bfile) > 0
  then
    FOR i IN 0 .. TRUNC((DBMS_LOB.getlength(l_bfile) - 1) / l_step) LOOP
      p_clob := p_clob || UTL_RAW.cast_to_varchar2(UTL_ENCODE.base64_encode(DBMS_LOB.substr(l_bfile, l_step, i * l_step + 1)));
    END LOOP;
    x_skip := 'N';
  else
    x_skip := 'Y';  -- empty file: nothing to attach
  end if;

  DBMS_LOB.fileclose(l_bfile);

exception
  when others
  then
    ...
end get_file;


Some tips 

In the current release (R11, and I believe R12 as well) deliverables were not available in the webservice.

We also used a static transformation table, but of course it's possible that you first need to fetch data before you send up your contract information. In that case you would call a reporting webservice first, fetch the XML data, place it in your transformation table and then upload your contract.
Depending on how often the data changes, this may or may not be workable. For example, downloading all suppliers/customers before uploading each contract is not what you want to do, right? So make sure such data is synchronized at an earlier stage, and focus on the contract itself.

Tuesday, September 26, 2017

Integration OPA Cloud and Oracle eBS (12.1.3) iProcurement - part 3 (close window)

In the current release of the Oracle Policy Automation model we were using, we could not close the interview window at the end. You can, however, navigate to another window using a URL.
So what we did is create a simple HTML page with JavaScript that closes the window, and navigate to that.

Add a close link in your model

On the close button in the model we change the style and refer to one of the input parameters we load using the LoadRequest (see previous blog).

The parameter (start_url) refers to the htm page that closes our window.

Place the HTM file in a location you can reach

To do this in eBS, we place this file in $OA_HTML on the application server, fetch the base link using the profile APPS_SERVLET_AGENT and concatenate it with our HTM file name to pass the link to OPA.
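
A sketch of deriving that link (the HTM file name is hypothetical):

-- base URL from the servlet agent profile, concatenated with our page
l_start_url := fnd_profile.value('APPS_SERVLET_AGENT') || '/xx_opa_close.htm';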

The HTM file contains the following logic

<html>
<head>
<script type="text/javascript">
function closeWP() {
 var Browser = navigator.appName;
 var indexB = Browser.indexOf('Explorer');

 if (indexB > 0) {
    var indexV = navigator.userAgent.indexOf('MSIE') + 5;
    var Version = navigator.userAgent.substring(indexV, indexV + 1);

    if (Version >= 7) {
        window.open('', '_self', '');
        window.close();
    }
    else if (Version == 6) {
        window.opener = null;
        window.close();
    }
    else {
        window.opener = '';
        window.close();
    }

 }
else {
    window.close();
 }
}
</script>
</head>
<body onload="closeWP();">
</body>
</html>

Saturday, September 23, 2017

Integration OPA Cloud and Oracle eBS (12.1.3) iProcurement - part 2

We described earlier some of the initial operations needed to integrate OPA with Oracle eBS. Below is an overview of the remaining operations. We have a model for OPA, which we create in the model designer. A model needs to be deployed to be usable at runtime.
In the model we invoke GetMetaData to get the data structure and the valuesets we want to use in our interview.

So each time something changes in your data structure or valuesets, you need to run a GetMetaData in the model and deploy the model to the runtime version.

During runtime you have two options
1. Start
2. Resume
These options use their own URLs to start the interview and also invoke different services: start invokes the LoadRequest, while resume invokes the GetCheckpoint to resume an existing interview from the point where you saved.

LoadRequest
The load request loads initial data from your data model, which can be used as input parameters for the interview. When calling the interview we can pass an initial parameter in the URL as well. We used that to indicate the user, the responsibility and a unique ID identifying the record in our table. Of course you want to encode those parameters, so we used DBMS_OBFUSCATION_TOOLKIT.DESEncrypt to encrypt them into one connection string.
When you then get the LoadRequest, it passes back your initial parameter, so you can decrypt it and identify the user, the authorization and the record we are creating/updating.
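
A sketch of building that connection string (key and layout are assumptions; DES input must be a multiple of 8 bytes, hence the padding):

declare
  l_user_id   number := 1318;            -- example values
  l_resp_id   number := 50559;
  l_record_id number := 1001;
  l_input     varchar2(64);
  l_encrypted varchar2(64);
begin
  l_input := rpad(l_user_id || '|' || l_resp_id || '|' || l_record_id, 64);
  dbms_obfuscation_toolkit.desencrypt
    ( input_string     => l_input
    , key_string       => 'secret01'     -- assumed key, at least 8 bytes
    , encrypted_string => l_encrypted );
end;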

The first thing we do then is an fnd_global.apps_initialize, because the user calling our webservice through the Integrated SOA Gateway is a generic user.
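
That call looks like this (with the IDs decrypted from the connection string):

-- switch the session context from the generic gateway user to the real end user
fnd_global.apps_initialize
  ( user_id      => l_user_id
  , resp_id      => l_resp_id
  , resp_appl_id => l_resp_appl_id );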

Our LoadRequest procedure looks something like

procedure LoadRequest
(
  root                 IN  VARCHAR2
, region               IN  VARCHAR2
, language             IN  VARCHAR2
, timezone             IN  VARCHAR2
, request_context      IN  xxgr_opa_t_RequestContext
, seedDataDescription  IN  xxgr_opa_t_Tables
, loadData             OUT xxgr_opa_t_LoadData
, error_msg            OUT VARCHAR2
, Status               OUT varchar2
)

Here seedDataDescription contains a list of entities and fields the interview is requesting from us. You have to check which specific fields the service wants and pass back values for exactly those fields; passing more values makes the service fail.

We also use a translation for booleans: Y/N values in the database have to be translated to true/false values for the service, since we cannot use PL/SQL booleans in our data structures directly.
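
For example, a minimal translation helper:

-- OPA expects the strings 'true'/'false'; PL/SQL booleans cannot pass
-- through our webservice data structures
function yn_to_bool (p_yn in varchar2) return varchar2 is
begin
  return case when p_yn = 'Y' then 'true' else 'false' end;
end yn_to_bool;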

What we actually did is draw a sequence number and pass it as a parameter to the interview. As soon as we got the LoadRequest back, we created our record in the database with that sequence (we were creating entities through the interview, so until we get some feedback from the interview we do not actually need the record).
During SetCheckpoint and SaveRequest we continue processing our record.

Information we pass on the load includes things like the name of the user that called the interview, his organization, etc.

A drawback is that we can return an error to the webservice, but OPA cannot handle it: it will not show the error message we send back (just a generic error).


SetCheckPoint
During your interview you can save the data so you can resume later. These saves are called checkpoints. The hub calls the SetCheckpoint to save a base64 encoded zip file of the interview. That zip file contains an XML with the variables entered in the interview, so if you wanted to, you could unzip and decode the information and actually store structured data between save points.

procedure SetCheckpoint
(
  request_context   IN  xxgr_opa_t_RequestContext
, checkPointData    IN  xxgr_opa_r_CheckPoint_Data
, checkpointId      OUT varchar2
, error_msg         OUT varchar2
, Status            OUT varchar2
)

So setting a checkpoint is merely saving the blob data given our context (request_context). That again contains the parameter(s) we passed to the initial URL, which is our encrypted key with user information.

In our specific case we would create purchase requisitions from our interviews, so this was the moment we actually created the requisition and related our interview (using a custom table) to it.


SaveRequest
When you are done with your interview you can submit the data using a save request operation.


procedure SaveRequest
(
  root              IN  VARCHAR2
, region            IN  VARCHAR2
, language          IN  VARCHAR2
, timezone          IN  VARCHAR2
, request_context   IN  xxgr_opa_t_RequestContext
, submitData        IN  xxgr_opa_t_submit_data
, attachments       IN  xxgr_opa_t_attachments
, auditReport       IN  xxgr_opa_t_audit_report_list
, updateData        OUT xxgr_opa_t_UpdateData
, error_msg         OUT VARCHAR2
, Status            OUT varchar2
)

This is the most complex operation, because now we get all the structured data in the submitData, including attachments and audit reports.

There are some restrictions on attachments in the service. You can restrict the size of files in OPA, but the SOA Gateway may have its own restrictions as well. We also had a service bus in between with memory restrictions, so we had a limit of max 40 MB per file. Note that each SetCheckpoint resends, inside the base64 encoded zip, any attachments you already uploaded. So it's good practice to add your attachments as late as possible in the interview to avoid a lot of data traffic.

The submitData contains two parts:
- the input fields
- a request for output fields

The latter is a request to pass back some information after the submit (which you can use in the OPA model). So we capture the fields that are requested, to pass them back later when we are done (for example to pass back a requisition number).

 <<OutputFields>>
        FOR i in 1 .. submitData(l_det_ind).submitRow(1).outputfield.count 
        LOOP  
          L_Outputs (submitData(l_det_ind).submitRow(1).outputfield(i).name) := submitData(l_det_ind).submitRow(1).outputfield(i).name;
        END LOOP OutputFields;

Then we loop through all the input fields, validate the input and store the data. Of course we could store the names as indexes in a table, but we still need to decide per field what we want to do with it.

  <<InputFields>>
  FOR i IN 1 .. submitData(l_det_ind).submitRow(1).inputfield.COUNT
  LOOP
    IF  submitData(l_det_ind).submitRow(1).inputfield(i).name = 'DESCRIPTION'
    AND submitData(l_det_ind).submitRow(1).rowAction = 'update'
    THEN
      l_description := submitData(l_det_ind).submitRow(1).inputfield(i).data_val;
    END IF;
  END LOOP InputFields;

Note that we can have multiple entities (l_det_ind), each with multiple rows and multiple fields. In our specific case we only had one main record with multiple sub records, so we used submitRow(1) here; otherwise we would have used a loop.

Attachments

Now we get two types of attachments: the main report comes IN the data, and there is a separate attachments parameter.

submitData(l_det_ind).submitRow(1).attachments

contains the attachments on our main entity.

So finally we validate our input, update our record and return a message. This message can be shown if you use the LoadAfterSubmit option in OPA: it loads information you can give back, so we can show error and warning messages if necessary.

In our current release you can only submit once. After that you need to close the interview and re-open it to make changes.


GetCheckPoint

Then finally the GetCheckpoint operation, which is used when you resume an interview. We simply read the base64 encoded string (zip file) and pass it back to the webservice.


procedure GetCheckpoint
(
  request_context   IN  xxgr_opa_t_RequestContext
, checkPointData    OUT xxgr_opa_r_CheckPoint_Data
, error_msg         OUT varchar2
, Status            OUT varchar2
)

So this is pretty straightforward. The only other thing we do, on all operations, is check whether the user is allowed to perform the action. For example, you cannot do a Load if you aren't logged in right now, and you cannot submit if you were not logged in today.