A Performance Analysis: Point-to-Point Web Service Calls vs. Brokered Web Service Calls via PI

We once had a requirement to integrate a 3rd-party IVR system with SAP ECC. As a standard practice, we approached the solution as brokered, i.e., interfacing via SAP PI. The interfacing solution agreed upon was a synchronous web service call: we would expose the web service, and IVR would consume it.

The performance requirement we had was 3 seconds; that is, the request-response cycle had to complete within 3 seconds. This led us to a statistical comparison of the following cases:

  • Using PI AAE with ABAP proxy – Here we exposed the web service out of PI, and IVR would call the PI web service. PI calls an ABAP proxy on the ECC end using the Advanced Adapter Engine's local processing capability.
  • Direct ECC RFC web service, bypassing PI – In this case, an RFC was written on the ECC end and exposed directly out of ECC as a web service, which IVR would call.

We tested both scenarios using SOAP UI, and below are the statistics we obtained. The cases that crossed the 3-second SLA are highlighted in yellow.

Analysis

[Statistics table: response times for both scenarios; SLA breaches highlighted]

Summary of the analysis:

[Summary table]

Conclusion:

Looking at the above statistics, we find that direct ECC calls performed better (though not significantly) than calls routed through PI AAE. We tested with a small data set and the simplest possible coding, involving only one SELECT statement. However, with increased data volume and more complex logic involved, this observation/ratio of response times might change.

Can SAP PI and Seeburger Business Integration Server (BIS) co-exist in an Integration landscape?

SEEBURGER Business Integration Server (BIS) is one of the most common EDI translators in SAP implementation projects. SAP PI is also typically available in such a landscape, being used extensively to support various integration needs.

SEEBURGER BIS has distinct features to support B2B communication/integration. A few of them are as follows:

  • Managed file transfer
  • Out-of-the-box, standardized EDI translation mechanisms
  • Support for secure protocols like AS2 and SFTP, required for critical and sensitive financial/customer transactions
  • Support for high-volume transactions and large message sizes
  • Out-of-the-box features for partner/agreement management

With that said, BIS has certain limitations in its native integration with SAP systems: the only integration mechanisms available are IDocs and file systems. SAP PI can bridge the gap to produce an efficient implementation.

The following are a few use cases for SAP PI and SEEBURGER BIS to be used together in a landscape.

Native communication with SAP systems

In the majority of SAP implementations, a common integration strategy is to avoid the creation of custom IDoc types, more so in cases where the number of source fields on the interface is not large and custom logic needs to be implemented to populate such fields. An ABAP proxy using the XI adapter is an obvious choice in most cases.

SEEBURGER BIS does not support proxy-like communication. To achieve the same effect, a pass-through or hop interface can be built, with SAP PI passing the message to SEEBURGER BIS without any data transformation. BIS can then transform this message into an acceptable target format and transmit it to the business partner via a secure channel.

BIS supports inbound and outbound HTTP calls. SAP PI can connect to SEEBURGER BIS via a receiver HTTP adapter.

This reduces the development effort in the SAP system by avoiding the creation of custom IDoc types. It should only be done for interfaces in which the message sizes are not too large and the additional delay in message processing due to the hop interface is acceptable; this avoids failures in PI due to large message sizes.

Transmission of data to business partners using secured protocols like AS2 or SFTP  

This is a case in which message sizes are small and the transformations are easily achievable within SAP PI, or are delivered as predefined content within SAP PI (e.g. the SAP ECC and Experian integration for credit checks). Here the requirement is to deliver (non-EDI) files to business partners using secured mechanisms like SFTP, or with encryption not supported out of the box in SAP PI.

BIS supports an out-of-the-box FTP server. This server has two nodes: a central (internal) node and an external node in the DMZ. SAP PI can generate a file onto the central BIS node, and BIS can then deliver the file via secured mechanisms like SFTP to business partners.

This is a pass-through process in BIS.

These are a few patterns which I implemented in one of my projects, achieving reduced development times and additional security benefits. The PI version used was SAP PI 7.11, and the SEEBURGER version used was SEEBURGER BIS 6.3.3.

Data Cross Reference Strategies in SAP PI

Data cross reference or value mapping is a critical component of any SAP PI Integration Project.

In very simplistic terms, this is a technique to map data values from a source-produced format to a target-acceptable format. Some very common examples are country code conversions, unit of measure (UoM) conversions, etc.
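To make the idea concrete, a cross reference is essentially a keyed lookup from a source value to a target value. The sketch below (in Java, with made-up example codes rather than real project data) shows the shape of such a mapping, including a common fallback convention of passing the source value through unchanged when no entry exists:

```java
import java.util.HashMap;
import java.util.Map;

public class ValueMapping {
    // Illustrative cross-reference table: ISO alpha-3 country codes to
    // two-letter codes. The entries are examples, not from an actual project.
    private final Map<String, String> table = new HashMap<>();

    public ValueMapping() {
        table.put("USA", "US");
        table.put("GBR", "GB");
        table.put("DEU", "DE");
    }

    // Returns the target value, or the source value unchanged when no
    // cross reference exists (a common fallback convention).
    public String map(String source) {
        return table.getOrDefault(source, source);
    }
}
```

For instance, `new ValueMapping().map("DEU")` yields `"DE"`, while an unmapped value is passed through as-is.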

In one of my implementation projects, I had the following use cases.

  • Basic value mapping requirements –

This requirement was to map basic data objects like country codes, UoM, plant to company code, and company code to bank account. The data volume for these objects was under 5,000 cross references.

  • Master Data Cross reference –

The requirement was to convert vendor numbers between SAP and 3rd-party systems. The data volume was huge, in the range of 30,000–50,000 records. This cross reference was not available in ECC; the only source of the cross reference was MDM. The frequency of use for this cross reference was also high.

  • Master Data Cross reference SNC

The requirement here was to have a cross reference between an SAP vendor/plant and SNC business partners. All non-standard SNC interfaces would need the business partner to be populated for successful message processing. The volume of such data was high, as was the frequency of use.

Multiple options were weighed to determine a cross reference strategy. The driving factors were a well-defined storage location, high-performance access, and ease of maintenance and replication.

The following were the major options considered:

  • Translate at source - 

The value translation happens at the source, which requires the converted value to be available there. This scenario is rare in practice, but it is the most efficient option, as it avoids any additional overhead in message processing.

  • Lookup at source -  

In case the converted value is available at the source, but the source is incapable of passing the transformed value as part of the interface (mostly in the case of standard IDocs, with a strategy to minimize custom exits), another option is to look up the value from the SAP PI mapping programs.

This lookup is done via a cross-system RFC or SOAP lookup from the PI mapping.

It should be noted that the above can cause performance issues at interface runtime, especially in cases where the frequency of use and runtime requirements of these interfaces are high.
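One common mitigation, assuming the looked-up values are stable for the duration of a mapping run, is to cache lookup results so that each distinct source value triggers only one remote call. A minimal sketch follows; the remote call is simulated by a function argument, and none of these names are SAP APIs:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CachedLookup {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> remoteLookup; // stands in for the RFC/SOAP call
    private int remoteCalls = 0; // counts actual remote round trips, for demonstration

    public CachedLookup(Function<String, String> remoteLookup) {
        this.remoteLookup = remoteLookup;
    }

    // Only values not seen before in this run cause a remote call.
    public String lookup(String source) {
        return cache.computeIfAbsent(source, s -> {
            remoteCalls++;
            return remoteLookup.apply(s);
        });
    }

    public int remoteCallCount() { return remoteCalls; }
}
```

For example, mapping 1,000 records that contain 50 distinct vendor numbers would then cost 50 remote calls instead of 1,000.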

  • Replicate to PI Cache (Value Mapping Replication) –

In this case the mapping values need to be replicated to the PI cache from a consolidation system. PI supports out-of-the-box features, using ABAP and Java proxies, to replicate value mapping data to the PI value mapping cache from an ABAP-based consolidation system. Data can be maintained in the consolidation system.

This mapping can be read from the PI mapping via optimized out-of-the-box functions.

The best practice on a fairly sized PI system is to limit the number of records in the PI cache to fewer than 20,000 entries. This option was best suited for mapping basic data objects.

  • Replicate to PI ABAP Stack –

In case the cross reference data volume is high, the PI ABAP stack can be used to store the cross reference.

This data can be read from the mapping program via a local RFC call to the PI ABAP stack.

This suits cases in which data is not available at the source, data storage requirements are high, and the performance penalty of cross-system lookups needs to be avoided.

Based on the above options, the following flow chart could be defined.

[Flow chart: cross reference strategy selection]
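The decision flow can also be approximated in code. The sketch below is my reading of the options above: the 20,000-entry figure comes from the cache best practice mentioned earlier, while the structure and names are illustrative assumptions rather than the original chart:

```java
public class CrossRefStrategy {
    enum Strategy { TRANSLATE_AT_SOURCE, LOOKUP_AT_SOURCE, PI_VALUE_MAPPING_CACHE, PI_ABAP_STACK }

    // Chooses a cross-reference strategy from the driving factors discussed above.
    static Strategy choose(boolean valueAvailableAtSource,
                           boolean sourceCanPassValue,
                           int recordCount,
                           boolean highFrequency) {
        if (valueAvailableAtSource) {
            if (sourceCanPassValue) {
                return Strategy.TRANSLATE_AT_SOURCE;  // cheapest: no middleware overhead
            }
            if (!highFrequency) {
                return Strategy.LOOKUP_AT_SOURCE;     // cross-system lookup is acceptable
            }
        }
        if (recordCount < 20_000) {
            return Strategy.PI_VALUE_MAPPING_CACHE;   // replicate into the PI cache
        }
        return Strategy.PI_ABAP_STACK;                // large volumes: local RFC lookup
    }
}
```

Under this reading, the basic value mapping case (about 5,000 records, not available at source) lands in the PI cache, while the 30,000–50,000 record master data case lands on the ABAP stack.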

Based on the above chart, the scenarios were handled in the following manner.

  • Basic value mapping requirements –

This is a case in which the mapping was not available at the source. The volume of data was around 5,000 records and the frequency of use was high. Thus a consolidation system, SAP ECC, was chosen to maintain the cross reference; this was then replicated to the PI value mapping cache using the standard value mapping replication techniques.

This resulted in high performance of frequently run interfaces.

  • Master Data Cross reference –

This is the use case in which the data volume was high, data was not available at the source, and performance requirements were high. The value mapping cache was not an option here. The data was replicated to the ABAP stack tables, and a local RFC lookup was done from the PI mapping.

These local RFC calls were more optimized than cross-system RFC/SOAP lookup calls.

As the cross reference was available in MDM, a simple replication interface was built from MDM to PI (using a receiver RFC adapter) to keep the data up to date.

The PI version used here was PI 7.11.

The Code Life Cycle - Managing Custom Code?

In the majority of projects that implement SAP PI, developments that require custom adapter modules and Java proxies are relatively few in number. But there are projects in which most of the developments are based on such custom code. Also, with the single-stack concept gaining importance, these developments will gain further momentum.

How do you manage your Java code when it comes to SAP PI?

I assume most of us don't really bother. We get our coding done in NWDS, compile the code locally, export the JAR or EJB, load them onto the server and voilà, we are done.

But as the code base grows, managing this gets tricky. PI, out of the box, doesn't provide you any help either. You can edit your code in the ESR, but can only hope that it compiles by itself. If it could, it would be wonderful, wouldn't it?

One of the recommended ways is to implement NWDI and have the code centrally maintained. But I personally haven't found it easy to configure NWDI and start hosting my code. Ignoramus? Mea culpa! (Maybe SAP can indeed host a good how-to guide to help us with this?)

With Eclipse integration into the ESR, I believe this will open a new dimension of possibilities. Imagine code management via the IDE, integrated with a check-in/check-out version management feature.

I hope this is what SAP is aspiring toward. In any case, I have already posted this on Idea Place and will wait to see if anyone picks it up.

Question:

How do you manage your custom Java code? Do you use NWDI extensively for SAP PI projects? Or is there any other way you are achieving this in your SAP PI projects?

Flexible Serialization in SAP PI

Let us start by defining what strict serialization is: serialization during message processing is a feature by which messages are delivered to the target, via a middleware, in the same sequence as generated by the source.

The quality of service defined for this is EOIO (Exactly Once In Order).

Serialized messages are processed in a specified queue, both on the middleware and in the target system. In case of a processing error on any particular message in the queue, the message queue gets blocked; all subsequent messages go into a wait state until the error is resolved.

Some implementations have a unique requirement for flexible serialization. In this case, messages need to be serialized, but in case of an error the other messages can continue, staying serialized and ignoring the failed one, which can be attended to manually at a later point in time.

The following are the advantages of flexible serialization.

1.  Message processing remains serialized in the majority of cases, which are error-free.

2.  Message processing for an interface does not halt due to the failure of one message in the queue.

3.  An error notification informs the support team of the failure, and the subsequent manual action can be performed by them.

It should be noted that flexible serialization should be implemented only in cases where the forecast failure rate is low and manual reconciliation is possible.
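Before looking at a concrete implementation, the intended semantics can be sketched in isolation. The toy processor below (all names are mine, not SAP APIs) keeps delivering messages in order and parks a failed message for later manual handling, analogous to moving it to a saved queue:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Predicate;

public class FlexibleQueue {
    private final List<String> delivered = new ArrayList<>();
    private final List<String> parked = new ArrayList<>(); // analogous to a saved queue

    // Processes messages in order; a failed message is parked and the
    // rest of the queue continues, still in order (flexible serialization).
    public void process(Queue<String> inbound, Predicate<String> deliverOk) {
        while (!inbound.isEmpty()) {
            String msg = inbound.poll();
            if (deliverOk.test(msg)) {
                delivered.add(msg);
            } else {
                parked.add(msg); // manual reconciliation later
            }
        }
    }

    public List<String> delivered() { return delivered; }
    public List<String> parked() { return parked; }
}
```

With messages m1, m2, m3 where m2 fails, delivery still happens for m1 and m3 in their original relative order, while m2 waits in the parked list.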

SAP PI, like most middleware systems, does not support this out of the box. A simple watcher report can be written, based on the qRFC APIs, in PI and connected ABAP-based SAP systems to achieve the same. The purpose of this watcher program is to move the erroneous message out of the active LUW queue (SMQ2) into the saved LUW queue (SMQ3) for all queues requiring flexible serialization.

The following would be the details of the watcher operation.

1.  The watcher monitors all specified serialized queues in SMQ2.

2.  The watcher looks for specific failure statuses like SYSFAIL or CPICERR.

3.  Upon detecting an error, the watcher checks the queue depth in SMQ2 (the running/active LUWs). The business team can define the minimum active queue depth for flexible serialization to kick in.

4.  The watcher also looks at the depth of the saved queue, SMQ3. This is important to prevent message movement when there is a generic issue with the interface that would impact every flowing message.

5.  Based on the above parameters, the watcher then moves the message to SMQ3.

6.  Finally, an alert notification is sent out to the respective users, specifying the movement.
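Steps 3 to 5 amount to a small guard condition. A sketch of that decision follows, with hypothetical names; the two thresholds correspond to the depth parameters supplied on the watcher's selection screen:

```java
public class WatcherGuard {
    // Decides whether the watcher should move the failed first LUW from the
    // active queue (SMQ2) to the saved queue (SMQ3), per steps 3-5 above.
    static boolean shouldMove(boolean firstLuwInError,
                              int activeQueueDepth, int minActiveDepth,
                              int savedQueueDepth, int maxSavedDepth) {
        if (!firstLuwInError) return false;
        // Only act when enough messages are waiting behind the failure...
        if (activeQueueDepth < minActiveDepth) return false;
        // ...and stop moving if the saved queue is filling up, which suggests
        // a generic interface problem rather than one bad message.
        return savedQueueDepth < maxSavedDepth;
    }
}
```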

The following is a selection screen.

[Screenshot: selection screen]

The following is the code snippet for the same.

TYPE-POOLS: salrt.
* Get all errored inbound queues
TABLES: trfcqin.
SELECT-OPTIONS: s_qname FOR trfcqin-qname,
                s_eqname FOR trfcqin-qname,
                s_qstate FOR trfcqin-qstate,
                s_iqname FOR trfcqin-qname.
PARAMETERS: p_pdf AS CHECKBOX DEFAULT 'X',
            p_qdepth TYPE int4,
            p_sqdep TYPE int4.
DATA: v_err_qs TYPE sy-tabix,
      v_err_qss TYPE string,
      v_err_msg(200) TYPE c,
      v_msg TYPE string,
      v_trfcmess TYPE t100-text,
      v_qstate TYPE trfcqin-qstate,
      o_cont TYPE  REF TO if_swf_cnt_container,
      o_err TYPE REF TO cx_root,
      w_qin TYPE trfcqin,
      w_qstate TYPE trfcqstate,
      w_tid TYPE arfctid,
      t_no_pdf_cont TYPE soli_tab,
      t_qin TYPE TABLE OF trfcqin,
      t_qstate TYPE TABLE OF trfcqstate,
      v_q_deep TYPE sy-index,
      ls_message TYPE zsca_string,
      lt_message TYPE zttca_string,
      lt_qview TYPE TABLE OF  trfcqview,
      ls_qview LIKE LINE OF lt_qview.
*CONSTANTS: C_INT_ID TYPE ZOBJID VALUE 'ECC_BLOCKED_QUEUES'.
START-OF-SELECTION.
*Identify the queues
  CALL FUNCTION 'TRFC_QIN_GET_ERROR_QUEUES'
    EXPORTING
      client      = sy-mandt
      with_qstate = 'X'
    TABLES
      qtable      = t_qin
      qstate      = t_qstate.
  DELETE t_qin WHERE NOT qname IN s_qname.
*  DELETE T_QIN WHERE QNAME IN S_IQNAME.
  IF NOT s_qstate[] IS INITIAL.
    LOOP AT t_qin INTO w_qin.
      CLEAR: v_qstate.
      CALL FUNCTION 'TRFC_QIN_STATE'
        EXPORTING
          qname                   = w_qin-qname
        IMPORTING
          qstate                  = v_qstate
*         QLOCKCNT                =
          qdeep                   = v_q_deep
*         QRESCNT                 =
*         WQNAME                  =
*         ERRMESS                 =
*       EXCEPTIONS
*         INVALID_PARAMETER       = 1
*         OTHERS                  = 2
                .
      IF sy-subrc <> 0.
      ENDIF.
      IF  v_q_deep < p_qdepth.
        DELETE t_qin.
      ELSE.
        CHECK NOT v_qstate IS INITIAL.
        CHECK v_qstate NOT IN s_qstate.
      ENDIF.
    ENDLOOP.
  ENDIF.
*   * Move the first LUW to the saved queue
  LOOP AT t_qin INTO w_qin.
    CLEAR: w_tid.
    CALL FUNCTION 'TRFC_QINS_OVERVIEW'
      EXPORTING
        qname  = w_qin-qname
        client = sy-mandt
      TABLES
        qview  = lt_qview.
    READ TABLE lt_qview INDEX 1 INTO ls_qview.
    IF ls_qview-qdeep < p_sqdep.
      CALL FUNCTION 'TRFC_QIN_GET_FIRST_LUW'
        EXPORTING
          qname                   = w_qin-qname
          client                  = sy-mandt
*     NO_READ_LOCK            = ' '
       IMPORTING
         tid                     = w_tid
*     QSTATE                  =
*     WQNAME                  =
          errmess                 = v_trfcmess
*     FDATE                   =
*     FTIME                   =
*     FQCOUNT                 =
*     SENDER_ID               =
*   TABLES
*     QTABLE                  =
       EXCEPTIONS
         invalid_parameter       = 1
         OTHERS                  = 2
                .
      IF sy-subrc <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ELSEIF w_tid IS NOT INITIAL.
*      IF NOT S_EQNAME[] IS INITIAL AND W_QIN-QNAME IN S_EQNAME.
*        WRITE: / '*', AT 3 W_QIN-QNAME, AT 40 W_TID, AT 70 V_TRFCMESS.
*        CONTINUE.
*      ENDIF.
        SUBMIT rstrfcdk AND RETURN WITH tid = w_tid.
* Activate the QIN scheduler for this queue.
        CALL FUNCTION 'QIWK_SCHEDULER_ACTIVATE'
          EXPORTING
            qname = w_qin-qname.
        WRITE: / w_qin-qname, AT 40 w_tid, AT 70 v_trfcmess.
      ENDIF.
    ENDIF.
  ENDLOOP.

The business user should be notified of the above queue movement with the help of an alerting mechanism.

SAP Developer Network SAP Weblogs: SAP Process Integration (PI)