Migration to SAP PI

Disclaimer: These are all my own views only, and do not represent those of my employer or anyone else.

I have been fortunate enough to be involved in a couple of migration projects, where we replaced other middleware (e.g. eGate, Mercator) with SAP PI. In the process, I have learned that migration requirements pose a different set of challenges than implementation requirements. During implementations, our main focus is on getting clarity on the requirement, which generally changes and gets refined in some form or other until go-live and beyond, and then on using the middleware tool's features to best design the interfaces. The best design is thus driven by the tool's abilities and the out-of-the-box features it provides; on top of that, custom code is written to fill the gaps and fulfill the requirement.

When we are talking about migration, however, the requirement is already known and functional on a different middleware tool. The present design is based on that tool's capabilities and best practices. These capabilities and best practices differ from tool to tool, so what is best designed on one tool might not be the best design for PI, posing frequent challenges to the migrating tool's capability. An as-is migration might therefore not be the best way to go, and in most cases might not be possible at all. These technical challenges lead to greater complexity, and can lead to a very clumsy situation unless dealt with properly from the very beginning of the project.

How you approach a migration project thus turns out to be a vital parameter for its success. Below are some of the factors which should be taken into account from the blueprinting phase:

1.  It is important to have a comparative study between the two tools handy.

2.  Study the present requirements and try to classify them (based on the visible PI design). Examples might be "Control M", signifying interfaces which work along with the Control-M scheduler, or "External Web Services", denoting interfaces interacting with external web services. Below are a few patterns / "pointers to define patterns" that I can think of now:

  • Synchronous / asynchronous
  • Interfaces working along with other scheduling tools
  • EDI interfaces
  • Stateful requirements / potential ccBPM requirements
  • Special scheduling requirements
  • Dependency requirements
  • High-volume interfaces
  • High-frequency interfaces
  • Real-time requirements
  • Protocols not supported by the standard PI adapters
  • Special security requirements, e.g. digital certificates / SFTP requirements
  • Publish-subscribe patterns
  • Business-critical interfaces
  • High-availability requirements for specific sets of interfaces
  • Any other patterns specific to the requirement
  • Third-party systems -- in most cases third-party systems pose unique (party-specific) interface challenges

Next, carry out a high-level design and feasibility study of each pattern. In case adequate time is available, a deeper look at the individual interfaces is worthwhile. Try to narrow down the classifications into PI design patterns.

In the process, certain limitations of the tool are expected to crop up. There is a good chance that these limitations have a common solution, so they should be studied carefully for something generic, or at least something that can apply to a group of interfaces as a generic, parameterizable solution. This solution might range from a design pattern, to custom Java or ABAP mappings / proxies / adapter modules / operating system scripts, to building a map converter. A cost analysis needs to be carried out to gauge the advantages and disadvantages of building the generic solutions; it is likely that certain generic solutions will pass and be applicable to a significant share of the interfaces.

At this point we should have some generic solutions available, plus some specific problems still to be addressed, which looks much simpler and less clumsy than where we started from. A lengthy blueprint / requirement analysis / design phase is worth the investment!

PI 7.3 - Adapter User-Defined Message Search

As we all know, one of the interesting features enabled with PI 7.1 EHP1 was the User-Defined Message Search. But since (as mentioned in Note 1247043) the PI monitors in NetWeaver Administrator (NWA) are not supported for production scenarios, most of us could not use this feature any further. With PI 7.1 EHP1 Support Package 6, the user-defined search could only be configured in and used for the integration server ABAP client. As all of us must be aware by now, PI 7.3 comes with a Java-only installation option (AEX). So does this mean we cannot use the user-defined message search with AEX? Well, the good news is that the user-defined message search can now be configured and monitored from an AEX installation. It has been integrated with the new configuration and monitoring tool (pimon).

In this blog I will show how to configure and monitor the user-defined message search for an AEX installation (PI Adapter User-Defined Message Search). This holds true for a dual-stack installation as well.

Configuration:

Navigate to PI Adapter User-Defined Message Search either via pimon -> Configuration and Administration from the directory start page, or via NWA -> SOA -> Monitoring -> PI Adapter User-Defined Message Search Configuration.

Now click on the New button to create a filter and provide the values for the required fields, along with sender and receiver components, as shown below. All the values are available via drop-down (search help).

Now we need to create the search criteria for this filter (in the details section). This means we enter the fields we want to monitor on a given payload. You can monitor more than one field of a payload at once; you just need to create a separate search criterion for each. You have the option of either monitoring a payload value (XPath expression) or a field from the SOAP dynamic header. Please note that the XPath expression must yield a single value.

I am using the following test payload for this blog.
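The payload itself was shown as a screenshot; as a minimal illustrative sketch (element and namespace names here are my own, not from the original), such a payload might look like:

    <ns1:Order xmlns:ns1="urn:example:orders">
        <OrderNumber>4711</OrderNumber>
        <CustomerId>100042</CustomerId>
    </ns1:Order>

A matching search criterion would then be the XPath expression /ns1:Order/OrderNumber, which yields the single value 4711.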

Now we need to provide the namespace for the prefix value we have used in our XPath expression. The only restriction is that the prefix value cannot be longer than 8 characters.

Now if needed we can test our filter configuration by using the Test Search Criteria button.

Monitoring:

Now we can process a message which satisfies the filter criteria and navigate to the Message Monitoring section to check the payload against the filter. Under Message Monitoring -> Database we can use the user-defined search criteria under the Advanced option to provide our filter and the corresponding values. Here you can provide more than one search criterion for a payload search, as defined in the configuration step earlier.

You should see all the messages that satisfy the filter condition.

Note: A new tab has been added to the messages that satisfy a given filter, showing the set attributes and their corresponding values, as shown below.

The user-defined search is one of many ways to find messages in the Advanced Adapter Engine using the message monitor, without the use of TREX. The user-defined search is optionally also available for searching messages in the Integration Engine; in a dual-stack installation you can configure it using transaction SXMS_LMS_CONF.

Top 5 Things to Learn at SAP TechEd 2011 about SAP NetWeaver Process Integration (PI)

In May 2011, SAP NetWeaver PI 7.3 was made generally available after a successful ramp-up earlier this year. SAP TechEd 2011 is a good time to learn about ramp-up experience, new and enhanced features, and our plans going forward – to shape your plans and gain firsthand experience. We have compiled a comprehensive program covering SAP’s integration technology with 12 hours of hands-on sessions and more than 10 hours of lectures (all being offered at least twice in Las Vegas). You can find the full agenda and schedule on the SAP TechEd websites for Las Vegas, Madrid, and Bangalore.

It’s hard to pick the best sessions as this depends on your role and experience with SAP NetWeaver PI (unless you decide to simply attend all). Still, here’s a list of the top 5 things to learn at SAP TechEd 2011 with respective sessions:

1. Gain firsthand experience with the latest release (our hands-on sessions – you’ll get your own Java-only PI instance running in the TechEd cloud!)

  • SAP NetWeaver PI - Integration Flows Deep Dive (PMC164)
  • SAP NetWeaver PI - AEX (Improvements & New Features) (PMC266)
  • Build system centric processes using the combined BPM/PI offering (PMC165)
  • Combining SAP NetWeaver BRM and SAP NetWeaver PI (PMC166)
  • The SAP NetWeaver PI Java API (PMC360)

2. Learn about improvements for operating and maintaining SAP NetWeaver PI

  • Monitoring Enhancements for SAP NetWeaver PI (PMC108)
  • Near Zero Downtime Maintenance for SAP NetWeaver PI (ALM220)

3. Prepare your upgrade to SAP NetWeaver PI 7.3

  • SAP NetWeaver PI 7.3: Overview and Feedback from Ramp-up (PMC203)
  • Moving productive landscapes to PI 7.3 single-stack (PMC300)

4. Get up to date on our road to an integrated stack (and other enhancements)

  • SAP NetWeaver PI Roadmap (PMC202)
  • Deep Dive into what is New in SAP NetWeaver PI (PMC201)
  • BPMN for System Integration (PMC106)
  • Running system centric processes on SAP NetWeaver BPM and SAP NetWeaver PI (PMC204)

5. Learn from SAP NetWeaver PI customers and partners (ASUG sessions in Las Vegas; Madrid still in planning phase)

  • SAP NetWeaver Process Integration 7.1 Performance, Scaling, and Security (TEC115)

Moreover, we’ll be around for pod and networking sessions.

Proxy Configuration for PI 7.3 Java only

The purpose of this document is to highlight the new changes and the steps required to configure proxy communication between SAP PI 7.3 (Java-only installation) and backend SAP systems like ECC, CRM, etc.

PI 7.3 comes with two installation options: a Java-only installation (AEX) and a dual-stack (ABAP + Java) installation. In the Java-only installation, i.e. AEX, we do not have an ABAP stack on the PI system, hence configuration changes are required in PI as well as in the backend systems to enable proxy communication between them.

In the case of a PI 7.3 dual-stack implementation, the proxy configuration remains the same as in earlier PI versions.

Steps to be followed in the PI system

1. Go to PI NWA using http://<hostname>:<port>/nwa

2. Go to Configuration->JCo RFC Provider

3. Create two JCo RFC providers:

  • SAPSLDAPI_<PI SID>
  • LCRSAPRFC_<PI SID>

Create SAPSLDAPI_PEX:

Note the Application Server host and Gateway Service.

Here the application system is used to register the program ID SAPSLDAPI_PEX.

Reason: In a dual stack we generally give the PI ABAP stack details; for a Java-only installation, however, we need the details of some ABAP stack in the landscape to register the program ID. I have used the ECC application system with which PI is exchanging data.

Perform similar steps for LCRSAPRFC_PEX.

Steps to be followed in the application system

a) Create 3 RFC destinations (SM59):

- SAPSLDAPI (application system to SLD connection)

- LCRSAPRFC (application system to exchange profile connection)

- SAP_PROXY_ESR (application system to ESR connection)

b) Configure SLDAPICUST and test connections through transaction SLDCheck

c) Test the SPROXY transaction

d) Configure the SXMB_ADM transaction for runtime proxy message exchange

Create RFC destinations

- Create RFC destination SAPSLDAPI through SM59 of type T (Start External Program Using TCP/IP)

The gateway host and service are left blank because in PI NWA (JCo RFC Provider) we registered the program ID on the same application system's gateway.

Note: In a dual stack we generally give the SAP PI ABAP stack details, the same details we used while creating the JCo RFC provider.

- Create RFC destination LCRSAPRFC through SM59, of type T (Start External Program Using TCP/IP), similar to RFC destination SAPSLDAPI, and use the program ID LCRSAPRFC_<PI SID>.

Note: In case more application systems are connected to PI using SPROXY, we need to create the above RFC destinations in each additional application system, passing the gateway host and gateway service as provided in the JCo RFC provider.

- Test both RFC destinations.

- Create RFC destination SAP_PROXY_ESR through SM59, of type G (HTTP Connection to External Server).

Target host: PI host name

Port: HTTP port (5<XX>00, where XX is the instance number)

Logon details: the PIAPPL<SID> user (or a user with equivalent authorizations)

b) Configure SLDAPICUST and test connections through transaction SLDCheck.

The user should have the SAP_SLD_CONFIGURATOR role.

Generally we use the SLD_CL_<SID> user.

c) Test the proxy connection with the ESR via transaction SPROXY.

Note: Here all the configuration was working, but somehow I was not getting the exchange profile values using RFC destination LCRSAPRFC. Hence I had to create the RFC destination SAP_PROXY_ESR to make transaction SPROXY work.

d) Configure the SXMB_ADM transaction for runtime proxy message exchange.

Create an RFC destination <any name> of type G (HTTP Connection to External Server).

Target host: PI host name

Port: HTTP port (5<XX>00, where XX is the instance number)

Path prefix: /XISOAPAdapter/MessageServlet?ximessage=true

Logon details:

Go to transaction SXMB_ADM -> Integration Engine Configuration.

Note: For a dual stack we generally give an ABAP-stack RFC destination; but since we want to use an Integrated Configuration object (ICO), we need to connect using the SOAP adapter, hence the path prefix /XISOAPAdapter/MessageServlet?ximessage=true is used for the RFC destination.
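As a rough illustration of the resulting Integration Engine settings (the destination name is whatever you chose for the type G destination above):

    Role of Business System:          Application System
    Corresponding Integration Server: dest://<RFC destination of type G>

The dest:// prefix tells the Integration Engine to send its XI messages through that HTTP destination, which points at the AEX SOAP adapter.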

Introduction to Process Orchestration

If you analyse common business practices, you will find that they are generally a series of relatively simple activities that interact with each other to accomplish a larger, more complex result. While these smaller processes are the core of day-to-day business activities, many businesses only consider the large and complex aspects of their operations to be "processes". In truth, any chain of business activities that are carried out in order to achieve a specific goal should be considered a business process, regardless of how simple or small. Often costs are reduced, quality is increased and efficiency improved by simplifying and refining even the small processes. Keep in mind that a business process is not restricted to human interaction - it often includes interaction between various software applications as well.

Business Process Management

Business Process Management (BPM) is the way in which you plan, implement, and ultimately improve the processes that run your business. BPM tools such as SAP NetWeaver BPM help to model and enable the automation of business processes; these tools offer a clear view into processes through graphical representations, in-depth analysis and testing, process tracking, and continual process improvement. The beauty and power behind BPM is the capability to deliver the right work to the right people at the right time: everything can be automated and monitored. Key Performance Indicators (KPIs) can be measured and Service Level Agreements (SLAs) can be enforced with automatic escalation rules, all of this independent of the underlying applications. In other words, a good BPM tool can work seamlessly across all other enterprise applications, including on-premise, web, cloud and mobile.

Process Integration

While BPM focuses more on the human interaction of business processes, there are application integration tools such as SAP Process Integration (PI) that bring together various enterprise applications through a unified integration approach. These tools focus specifically on the automated communication between software applications. As an example, you could use SAP PI to facilitate the integration between Oracle Business Suite running Financials and SAP ERP running Human Resources. A well thought out integration product should never be technology or application dependent.

Business Rules Management

A huge problem with most enterprise software is that the logical conditions for business logic (business rules) are very often "hard-coded". This means that if the rules ever change, software developers are required to update all of the affected software components. Complex rules are often difficult to comprehend and implement; it can take days or even weeks for the business and the developers to finally understand and correctly code the rules. Typically the developers understand the rules differently from what the business has tried to communicate, and the change process takes a lot longer than it should. A risky but very common practice is that many business rules are only mapped out in the minds of specific business owners and are badly documented, if at all. If a key business owner leaves the company for any reason, the understanding of those rules goes with him or her.

A solution to this is what we call Business Rules Management (BRM). Essentially, all business rules are stored in the business rules engine and enterprise software simply taps into these rules when required. The rules are managed centrally so that when they change, none of your software components need to be recoded. Once again it is very important that a BRM tool (such as SAP BRM) is not application dependent and that all of your software applications can tap into the rules engine and leverage its power.

A good example of a business rule is the calculation of a customer's credit limit at a bank. It may sound like a simple rule, but there are many aspects taken into consideration: credit ratings, transaction history, age, level of education, current financial commitments, current income, assets and more.
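To make the hard-coding problem concrete, here is a minimal, purely illustrative Java sketch (not SAP BRM code; the Customer class and the thresholds are invented for the example). The first variant buries the rule in application code, so every policy change forces a redeployment; the second delegates the decision to a centrally managed rules service behind an interface, which is the essence of BRM.

    // Hypothetical domain object used by both variants.
    class Customer {
        int creditRating;
        double income;
        int getCreditRating() { return creditRating; }
        double getIncome() { return income; }
    }

    class CreditService {
        // Anti-pattern: the business rule is hard-coded into application logic.
        double creditLimitHardCoded(Customer c) {
            if (c.getCreditRating() > 700 && c.getIncome() > 50000) {
                return c.getIncome() * 0.4; // policy change = code change + redeploy
            }
            return 0.0;
        }
    }

    // BRM style: the decision lives in a central rules engine; the application
    // only calls this interface and is untouched when the rules change.
    interface CreditRules {
        double creditLimit(Customer customer);
    }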

Process Orchestration

The orchestration of business processes can be a complex task - one has to consider human interaction, application integration, rules management, KPI measurement, SLA monitoring, continuous improvement, user experience, business involvement, exception handling, and much more.

The SAP Process Orchestration bundle includes SAP BPM, SAP PI and SAP BRM. These are enabling tools that pass process ownership on to the business and allow IT to support the technology. All of these tools are completely independent and work across any other applications - this bundle is something that even non-SAP customers should be considering.

SAP has built their BPM and BRM tools in a way that the business can design the processes and rules with graphical modelling. Once a BPM model has been designed, it is simply passed on to the developer, who quickly ties application functionality to the process. The processes and rules that the business designs are the processes and rules that are ultimately executed. No more miscommunications or misunderstandings - simple.

Synchronous calls with the PI SOAP Adapter

I have not worked a lot with synchronous calls and modules. So I had to learn a bit about it.

In July I launched the SAP PIArchiving solution, to make it easier to archive messages in the adapter framework. It could be EDIFACT or XML messages that you want to find easily and store safely.

One question I was asked was how the system worked with synchronous calls, for instance for a web service. I guessed that the adapter would figure it out and that it would be possible to make it work.

Now I finally had time to test how to put the adapter modules on a SOAP adapter, to archive the messages in both directions. It proved really easy. The only change was that the adapter module should be placed both before and after the SAP standard module.

The processing sequence should just look like this.

[Screenshot: module processing sequence]
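As a rough sketch (module and key names here are illustrative, and the exact name of the SAP standard module depends on the adapter and release), the module chain on the SOAP channel might look like:

    1. Custom archiving module    (module key: archiveRequest)    -- archives the request
    2. SAP standard SOAP adapter module
    3. Custom archiving module    (module key: archiveResponse)   -- archives the response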

The modules can be configured independently of each other using the module key.

What seems to happen is that the two messages are processed separately.

[Screenshot: the two processed messages in message monitoring]

When looking at the processing logs it is possible to see the connection. In the adapter communication logs there is a link to the continued processing of a message, as well as a link back from that GUID. It is thereby possible to find the corresponding message.

[Screenshot: adapter communication log entries with linked message GUIDs]

What I like about how the framework works is that the modules do not need to know anything about the message being synchronous. My archiving modules still work, which is just really nice.

Chem XML Message eStandards and CIDX Scenario Part III

In my earlier blogs, efforts were made to explain CIDX standards and how to design and configure the objects to support CIDX communication.

Blog1

Blog2

I would like to make your experience with CIDX communication pleasant and fruitful through this blog.

This blog covers the intricate details regarding security and certificates through simple steps, focusing on PI 7.1.

You have already selected the CIDX adapter with transport protocol "HTTPS" and message protocol "RNIF 1.1" for communication. Setting the message protocol to RNIF 1.1 means you are configuring the scenario to handle the Preamble, Service Header, Service Content, digital signatures, etc.

We will focus on achieving HTTPS communication, which is HTTP combined with the SSL/TLS protocol to provide encrypted communication and secure identification of a network web server.

Step 1: Since the CIDX adapter runs on the adapter engine, you need to set up your server's Java stack to receive and send secured messages.

SSL communication is handled by the ICM (Internet Communication Manager) for both the Java and ABAP servers. You need to configure which one uses it: navigate to RZ10, select the profile <SYSID>_DVEBMGS00_<host> and configure the profile parameter

ssl/pse_provider              = JAVA

Step 2: Restart the server and notice the automatic creation of keystore views in SAP NetWeaver Administrator (NWA).

Navigate to NWA >> Configuration Management >> Certificates and Keys.

Identify the new keystore view named ICM_SSL_<instance ID>.

Create the private key in the specified keystore view using “Create” and follow the wizard.

Notice that "Generate CSR Request" is now enabled; use it to generate a CSR request. This step is needed to get your certificate issued by a third-party certificate authority, so that you are identified as a secure partner able to carry out secure online transactions and conduct business over the internet.

The certificate you purchase is the CSR response. Select the private key that you have just created and import the certificate via "Import CSR Response".

Copy these certificates into the Trusted CAs and secure_ssl keystore views.

Step 3: Load the public key of your partner with the entire certificate chain (public key, intermediate and root) into the keystore views ICM_SSL_XXXX and Trusted CAs.

In the following screenshot, you can view Verisign as the certificate authority and the chain of certificates.

They can be recognized as Verisign as the root, Verisign Class 3 Secure Server CA – G3 as the intermediate, and business.partner.com as the public key.

At times your partners provide self-signed certificates; PI supports these as well.

Step 4: Choose how you want to enable your partner to log into your server to process a message request.

a) In some cases, the certificate is issued with CN = <user id>; then simply provide the necessary authorizations to that user.

b) In most cases, the certificate is issued for a host name, e.g. business.partner.com. In this case, to support the certificate login, you need to perform additional settings.

    i. Create a certificate user, say PICERTUSER, with adequate authorizations (one of them being 'XI_AF_RECEIVE').

    ii. Navigate to NWA >> Configuration Management >> Security >> Authentication.

Go to the login module ClientCertLoginModule.

Edit it to maintain the option name "Rule1.getUserFrom" with the value "wholeCert".

       iii. Navigate to NWA >> Configuration Management >> Security >> Identity Management.

Display PICERTUSER and modify it to load the partner certificate (public key only).

This basically means that when a message comes from the remote server, the certificate is authenticated and the login is then accepted through PICERTUSER.

Internally, the message's certificate is validated by checking the certificate authority in ICM_SSL_XXX and the partner certificate against the Trusted CAs; if the check passes, the next step is to select the user matching the certificate.

    iv. As an additional optional step, you may want to restrict the processing of the scenario to this user through Business Component >> Assigned Users in the configuration.

Step 5: Follow my previous blog 2 for configuration.

Wait for another blog that focuses on Troubleshooting CIDX communication.

For additional help, use these SAP resources:

  1. Maintaining the User's Certificate Information

http://help.sap.com/saphelp_nwpi711/helpdata/en/a7/1cd08ffe25e34799cbbe1a7ecdb8ed/frameset.htm

  2. You may use the diagtool provided by SAP; more details are available in Note 1045019 - Web Diagtool for collecting traces. This is a very good tool that gives you visibility into how the message is being processed.

Handling patterns and manipulating hierarchies through XSLT

XSLT can handle complex and tricky requirements, especially when copying patterns and manipulating hierarchies. I came across a tricky requirement in my project: an IDoc-to-file scenario where the source IDoc XML was to be converted into a third-party-specific format.

Here we have an invoice IDoc (INVOIC.INVOIC02) which needs to be converted into a specific format. This transformation is required for a correction invoice IDoc, which means the IDoc has an even number of E1EDP01 segments: for n original E1EDP01 segments, there are another n correction E1EDP01 segments. The source structure of the IDoc is as follows:

    IDOC
        EDI_DC40
        E1EDKxx segments
        E1EDP01 segments (even number: n original, n correction)
        E1EDSxx segments

The third party requires the IDoc XML, but in a somewhat different format, as explained below. Let us introduce some notations:

E1EDKxx_Original - These are the E1EDKxx segments as present in the source IDoc, so let us call them the E1EDKxx_Original segments.

E1EDKxx_Changed - This is a modified version of the source E1EDKxx segments; one of the segments within E1EDKxx is changed.

Trailer - A copy of E1EDSxx, used as the trailer record.

Target structure required by the third party:

    IDOC
        EDI_DC40
        E1EDKxx_Original (as-is original E1EDKxx)
        E1EDP01 (item number 1, original item)
        E1EDSxx (trailer)
        E1EDKxx_Changed
        E1EDP01 (item number 1, correction item)
        E1EDSxx (trailer)
        E1EDKxx_Original (as-is original E1EDKxx)
        E1EDP01 (item number 2, original item)
        E1EDSxx (trailer)
        E1EDKxx_Changed
        E1EDP01 (item number 2, correction item)
        E1EDSxx (trailer)
        ...

We have a relation here between the original and the correction item (E1EDP01):

Original E1EDP01-POSEX = Correction E1EDP01-HIPOS.

Given the above information, we need to convert the source IDoc XML into the target structure as explained above. Achieving this transformation via graphical, Java or ABAP mapping would be quite difficult, so we will use an XSLT transformation here.

We approach the target XML with the following pseudocode:

    Loop on all E1EDP01
        If (current E1EDP01-POSEX = any other E1EDP01-HIPOS)
            write E1EDKxx_Changed;
            write current E1EDP01;       (original item)
            write trailer;
            write E1EDKxx_Original;
            write correction E1EDP01;    (correction item)
            write trailer;
        End If.
    Endloop

First we have to copy the multiple E1EDKxx segments into a variable. We will do that using the XSLT function starts-with(name(), 'E1EDK') in a loop.

        <!--Copying all the E1EDKxx segments from the idoc, these will work as header for original E1EDP01 -->
        <xsl:variable name="e1edkxx_original">
            <xsl:for-each select="//IDOC/*">
                <!--Copying the E1EDKxx segments as they are -->
                <xsl:if test="starts-with(name(), 'E1EDK')">
                <xsl:copy-of select="."/>
                </xsl:if>
            </xsl:for-each>
        </xsl:variable>

Note: To check every segment name inside IDOC, we run a loop on * (<xsl:for-each select="//IDOC/*">).

Now we need the E1EDKxx segments exactly as in the source IDoc, but with one change: E1EDK14-ORGID = 'G2O' (where the qualifier = '015'). We will store this data in another variable, e1edkxx_changed.

                         <!-- Copying all the E1EDKxx segments from the idoc with a change E1EDK14-ORGID = 'G2O', these will work as header for corrected E1EDP01 -->
        <xsl:variable name="e1edkxx_changed">
            <xsl:for-each select="//IDOC/*">
                <!--Copying other E1EDKxx segments as they are -->
                <xsl:if test="not (starts-with(name(), 'E1EDK14') and QUALF = '015') and (starts-with(name(), 'E1EDK'))">
                    <xsl:copy-of select="."/>
                </xsl:if>
                <!--Changing E1EDK14-ORGID where qualifier = '015' -->
                <xsl:if test="starts-with(name(), 'E1EDK14') and QUALF = '015'">
                    <E1EDK14>
                        <xsl:attribute name="SEGMENT"><xsl:value-of select="1"/></xsl:attribute>
                        <QUALF>015</QUALF>
                        <ORGID>
                            <xsl:value-of select="'G2O'"/>
                        </ORGID>
                    </E1EDK14>
                </xsl:if>
            </xsl:for-each>
        </xsl:variable>

We will store all the E1EDP01 segments in a variable all_e1edp01.

                <!--Copying the E1EDPxx segments as they are -->
        <xsl:variable name="all_e1edp01">
            <xsl:for-each select="//IDOC/*">
                <!--Copying the E1EDP01 segments as they are -->
                <xsl:if test="starts-with(name(), 'E1EDP') ">
                    <xsl:copy-of select="."/>
                </xsl:if>
            </xsl:for-each>
        </xsl:variable>

The E1EDSxx segments will be used as the trailer. A variable trailer holds all the E1EDSxx segments.

        <xsl:variable name="trailer">
            <!--Trailer Record E1EDSxx-->
            <xsl:for-each select="//IDOC/*">
                <!--Copying the E1EDSxx segments as they are -->
                <xsl:if test="starts-with(name(), 'E1EDS') ">
                    <xsl:copy-of select="."/>
                </xsl:if>
            </xsl:for-each>
            <!--    Finishing the trailer part here-->
        </xsl:variable>

We now have the e1edkxx_original, e1edkxx_changed, all_e1edp01 and trailer records ready. What remains is relating the original E1EDP01 segments to the correction E1EDP01 segments. We have been given the following condition for relating them:

HIPOS of correction E1EDP01 segments = POSEX of original E1EDP01 segments

This could be done using two nested loops, but we will use the XSLT key() function instead. The key() function returns a node-set from the document, using the index specified by an <xsl:key> element.

To use the key function, we first need to declare a key definition at the beginning of the XSLT code.

     <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ns0="urn:test:xi:Sales:100">
    <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
    <!--Defining key for HIPOS, HIPOS of corrected E1EDP01 = POSEX of original E1EDP01 -->
    <xsl:key name="E1EDPXX_HIPOS" match="E1EDP01" use="HIPOS"/>

Declaring the key at the beginning of the XSLT informs the processor that we will be using the key() function somewhere in the code. Now we just need to run a loop over the E1EDP01 segments and check, via the key function, whether:

E1EDP01-HIPOS from the key function = current E1EDP01-POSEX.

We then place these segments in the required format using copy-of.

           <!-- Forming the required structure here-->
            <IDOC>
                <xsl:copy-of select="$controlRecord"/>    <!-- Control Record -->
                <xsl:for-each select="$all_e1edp01/*">     <!-- Manupulating the E1EDPxx segments here -->
                    <!-- Checking if the value of Posex in the current E1EDP01 segment is equal to the HIPOS of any of E1EDP01 segment  -->
                    <!-- Assumption is that HIPOS is populated only in case of corrected E1EDP01 items and is equal to POSEX of the original E1EDP01 -->
                    <xsl:if test="POSEX = key('E1EDPXX_HIPOS', POSEX)/HIPOS ">
                        <xsl:copy-of select="$e1edkxx_changed"/>    <!-- Copying E1EDKxx Changed  -->
                        <xsl:copy-of select="."/>                    <!-- Copying E1EDP01 segment (original)-->                   
                        <xsl:copy-of select="$trailer"/>                <!--Copying E1EDSxx Segment-->
                        <xsl:copy-of select="$e1edkxx_original"/>     <!-- Copying E1EDKxx Original-->
                        <xsl:copy-of select="key('E1EDPXX_HIPOS', POSEX)"/>    <!-- Copying E1EDP01 segment for correction-->
                        <xsl:copy-of select="$trailer"/>
                    </xsl:if>
                </xsl:for-each>
                </IDOC>

Here is the complete code which transforms the IDoc into the required format.

       <?xml version="1.0" encoding="UTF-8"?>
<!-- This XSLT converts invoice IDoc into a required format as requested by third party.
    In case of correction invoice, Invoice IDoc would have even number of E1EDP01 (Item) segments since for each correction there will be an original E1EDP01   
      segment. Apart from this there will be standard header E1EDKxx segments and E1EDSxx segments in the INVOIC02 IDoc.
    The E1EDKxx segments are treated as header segment for the corrected E1EDP01 segments.
    The same E1EDKxx segments would be repeated for original items E1EDP01 with a slight change that ORGID will be set as G2O for E1EDK14 segment where qualifier = 015.
    The original IDoc structure received from SAP ECC in case of correction invoice is :
    IDOC
        EDI_DC40
        E1EDKxx segments
        (will be used as header for corrected E1EDP01 segment, and will be used as header (with a slight change in E1EDK14 segment) for original E1EDP01
        E1EDPxx segments ( even number of E1EDPxx segments)
        E1EDSxx segments ( will be used as trailer record in the output)
*****************************************************************************************************************************************************************************************
Structure as required by Third Party
        IDOC
            EDI_DC40
            E1EDKxx_changed
            E1EDP01 (item number 1, original item)
            E1EDSxx ( trailer)
            E1EDKxx_original
            E1EDP01 (item number 1, corrected item)
            E1EDSxx ( trailer)
            E1EDKxx_changed
            E1EDP01 (item number 2, original item)
            E1EDSxx ( trailer)
            E1EDKxx_original
            E1EDP01 (item number 2, corrected item) PSTYV=ZL2N
            E1EDSxx ( trailer)
            ................................................
            ................................................
    Original E1EDP01 segments would be related to corrected E1EDP01 segments by E1EDP01-POSEX and E1EDP01-HIPOS.
    POSEX of original E1EDP01 = HIPOS of corrected E1EDP01. HIPOS is populated by custom coding in ECC only for correction line items.
    This XSLT does the required transformation.
-->
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ns0="urn:test:xi:Sales:100">
    <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
    <!--Defining key for HIPOS, HIPOS of corrected E1EDP01 = POSEX of original E1EDP01 -->
    <xsl:key name="E1EDPXX_HIPOS" match="E1EDP01" use="HIPOS"/>
    <xsl:template match="/">
                <!--Copying the EDI_DC40 segment -->
        <xsl:variable name="controlRecord" select="//IDOC/EDI_DC40"/>
        <!--Copying all the E1EDKxx segments from the idoc, these will work as header for original E1EDP01 -->
        <xsl:variable name="e1edkxx_original">
            <xsl:for-each select="//IDOC/*">
                <!--Copying the E1EDKxx segments as they are -->
                <xsl:if test="starts-with(name(), 'E1EDK')">
                <xsl:copy-of select="."/>
                </xsl:if>
            </xsl:for-each>
        </xsl:variable>
        <!-- Copying all the E1EDKxx segments from the idoc with a change E1EDK14-ORGID = 'G2O', these will work as header for corrected E1EDP01 -->
        <xsl:variable name="e1edkxx_changed">
            <xsl:for-each select="//IDOC/*">
                <!--Copying other E1EDKxx segments as they are -->
                <xsl:if test="not (starts-with(name(), 'E1EDK14') and QUALF = '015') and (starts-with(name(), 'E1EDK'))">
                    <xsl:copy-of select="."/>
                </xsl:if>
                <!--Changing E1EDK14-ORGID where qualifier = '015' -->
                <xsl:if test="starts-with(name(), 'E1EDK14') and QUALF = '015'">
                    <E1EDK14>
                        <xsl:attribute name="SEGMENT"><xsl:value-of select="1"/></xsl:attribute>
                        <QUALF>015</QUALF>
                        <ORGID>
                            <xsl:value-of select="'G2O'"/>
                        </ORGID>
                    </E1EDK14>
                </xsl:if>
            </xsl:for-each>
        </xsl:variable>
        <xsl:variable name="trailer">
            <!--Trailer Record E1EDSxx-->
            <xsl:for-each select="//IDOC/*">
                <!--Copying the E1EDSxx segments as they are -->
                <xsl:if test="starts-with(name(), 'E1EDS') ">
                    <xsl:copy-of select="."/>
                </xsl:if>
            </xsl:for-each>
            <!--    Finishing the trailer part here-->
        </xsl:variable>
        <!--Copying the E1EDPxx segments as they are -->
        <xsl:variable name="all_e1edp01">
            <xsl:for-each select="//IDOC/*">
                <!--Copying the E1EDP01 segments as they are -->
                <xsl:if test="starts-with(name(), 'E1EDP') ">
                    <xsl:copy-of select="."/>
                </xsl:if>
            </xsl:for-each>
        </xsl:variable>
        <!-- Forming the required structure here-->
            <IDOC>
                <xsl:copy-of select="$controlRecord"/>    <!-- Control Record -->
                <xsl:for-each select="$all_e1edp01/*">     <!-- Manupulating the E1EDPxx segments here -->
                    <!-- Checking if the value of Posex in the current E1EDP01 segment is equal to the HIPOS of any of E1EDP01 segment  -->
                    <!-- Assumption is that HIPOS is populated only in case of corrected E1EDP01 items and is equal to POSEX of the original E1EDP01 -->
                    <xsl:if test="POSEX = key('E1EDPXX_HIPOS', POSEX)/HIPOS ">
                        <xsl:copy-of select="$e1edkxx_changed"/>    <!-- Copying E1EDKxx Changed  -->
                        <xsl:copy-of select="."/>                    <!-- Copying E1EDP01 segment (original)-->                   
                        <xsl:copy-of select="$trailer"/>                <!--Copying E1EDSxx Segment-->
                        <xsl:copy-of select="$e1edkxx_original"/>     <!-- Copying E1EDKxx Original-->
                        <xsl:copy-of select="key('E1EDPXX_HIPOS', POSEX)"/>    <!-- Copying E1EDP01 segment for correction-->
                        <xsl:copy-of select="$trailer"/>
                    </xsl:if>
                </xsl:for-each>
                </IDOC>
    </xsl:template>
</xsl:stylesheet>

The source file in Altova:

[Screenshot: source IDoc data in Altova]

On executing the XSLT, the target file is generated:

[Screenshot: target XML data in Altova]

Important points to note:

1) For running a loop on the IDoc data, we select //IDOC/*, since we have to copy multiple differently named segments.

2) Use of the XSLT function starts-with(name(), 'Pattern') to copy data.

3) Copying the required segments into variables and then placing them in the desired positions.

4) Using the key() function to relate two segments. The key function is quite efficient when dealing with large inputs.

A Performance Analysis between point to point Web Service call Vs Brokered Web Service call via PI

We once had a requirement to integrate a third-party IVR system with SAP ECC. As a standard practice we approached the solution as brokered, i.e. interfacing via SAP PI. The interfacing solution agreed upon was a synchronous web service call: we would expose the web service and the IVR would consume it.

The performance requirement we had was 3 seconds; that is, the request-response cycle should complete within 3 seconds. This led us to a statistical comparison between the following cases:

  • Using PI AAE - Proxy -- Here we exposed the web service out of PI, and the IVR would call the PI web service. PI then calls an ABAP proxy on the ECC end using the Advanced Adapter Engine's local processing capability.
  • Direct ECC RFC web service, bypassing PI -- In this case, an RFC was written on the ECC end and exposed directly out of ECC as a web service, which the IVR would call.

We tested both scenarios using SOAP UI, and below are the statistics we got. Cases that crossed the 3-second SLA are highlighted in yellow.

[Table: response time statistics]

Summary of the analysis:

[Table: summary of response times]

Conclusion:

Looking at the above statistics, we find that direct ECC calls performed better (but not significantly) than calls routed through the PI AAE. We tested with a small set of data and the simplest coding, involving only one SELECT statement. With increased data volumes and more complex logic involved, this observation / ratio of response times might change.

Can SAP PI and Seeburger Business Integration Server (BIS) co-exist in an Integration landscape?

SEEBURGER Business Integration Server (BIS) is one of the most common EDI translators in SAP implementation projects. SAP PI is also typically available in such a landscape, being used extensively to support various integration needs.

SEEBURGER BIS has distinct features to support B2B communication/integration. A few of them are as follows:

  • Managed file transfer
  • Out-of-the-box/standardized EDI translation mechanisms
  • Support for secured protocols like AS2 and SFTP, required for critical and sensitive financial/customer transactions
  • Support for high-volume transactions and large message sizes
  • Out-of-the-box features for partner/agreement management

That said, BIS has certain limitations in its native integration with SAP systems; the integration mechanisms available are IDoc and file systems. SAP PI can bridge the gap to produce an efficient implementation.

The following are a few use cases for SAP PI and SEEBURGER BIS being used together in a landscape.

Native communication with SAP systems

In the majority of SAP implementations, a common integration strategy is to avoid the creation of custom IDoc types, more so in cases where the number of source fields on the interface is not large and custom logic needs to be implemented to populate such fields. An ABAP proxy using the XI adapter is the obvious choice in most cases.

SEEBURGER BIS does not support proxy-like communication. To achieve it nevertheless, a pass-through or "hop" interface can be built, with SAP PI passing the message to SEEBURGER BIS without any data transformation. BIS can then transform this message into an acceptable target format and transmit it to the business partner via a secure channel.

BIS supports inbound and outbound HTTP calls. SAP PI can connect to SEEBURGER BIS via a receiver HTTP adapter.

This reduces the development effort on the SAP system by avoiding the creation of custom IDoc types. It should only be done for interfaces whose message sizes are not too large and where the additional delay in message processing due to the hop interface is acceptable; this avoids failures in PI due to message sizes.

Transmission of data to business partners using secured protocols like AS2 or SFTP  

This is a case in which message sizes are small and the transformations are easily achievable within SAP PI, or are delivered as predefined content within SAP PI, e.g. the SAP ECC and Experian integration for credit checks. Here the requirement is to deliver files (non-EDI) to business partners using secured mechanisms like SFTP, or with encryptions not supported out of the box in SAP PI.

BIS ships with an out-of-the-box FTP server. This server has two nodes: a central (internal) node and an external node in the DMZ. SAP PI can generate a file onto the central BIS node; BIS can then deliver the file via secured mechanisms like SFTP to business partners.

This is a pass-through process in BIS.

These are a few patterns which I implemented in one of my implementations, achieving reduced development times and additional security benefits. The PI version used was SAP PI 7.11 and the SEEBURGER version was SEEBURGER BIS 6.3.3.

Data Cross Reference Strategies in SAP PI

Data cross referencing, or value mapping, is a critical component of any SAP PI integration project.

In very simplistic terms, it is a technique to map data values from a source-produced format to a target-acceptable format. Some very common examples are country code conversions, UoM conversions, etc.

In one of my implementation projects, I had the following use cases.

  • Basic value mapping requirements –

This requirement was to map basic data objects like country code, UoM, plant to company code, and company codes to bank accounts. The data volume for these objects was under 5,000 cross references.

  • Master Data Cross reference –

The requirement was to convert vendor numbers between SAP and third-party systems. The data volume was huge, in the range of 30,000 - 50,000 records. This cross reference was not available in ECC; the only source was MDM. The frequency of use for this cross reference was also high.

  • Master Data Cross reference for SNC –

The requirement here was to have a cross reference between SAP vendor/plant and SNC business partners. All non-standard SNC interfaces need the business partner to be populated for successful message processing. The volume of such data was high, as was the frequency of use.

Multiple options were weighed to determine a cross-reference strategy. The driving factors were a defined storage location, high-performance use, and ease of maintenance and replication.

The following were the major options considered:

  • Translate at source - 

The value translation can happen at the source, provided the converted value is available there. This scenario occurs rarely, but it is the most efficient option, as it avoids any additional overhead during message processing.

  • Lookup at source -  

In case the converted value is available at source, but the source is incapable of passing the transformed value as part of the interface (mostly in the case of standard IDocs, with a strategy to minimize custom exits), another option is to look the value up from the SAP PI mapping programs.

This lookup is done via a cross-system RFC or SOAP lookup from the PI mapping; a sketch of such a lookup follows this list.

It should be noted that this can degrade interface runtime performance, especially in cases where the frequency and runtime requirements of these interfaces are high.

  • Replicate to PI Cache (Value Mapping Replication) –

In this case the mapping values need to be replicated to the PI cache from a consolidation system. PI supports out-of-the-box features, using ABAP and Java proxies, to replicate value mapping data from an ABAP-based consolidation system to the PI value mapping cache. Data can be maintained in the consolidation system.

This mapping can be read from the PI mapping, via out of the box optimized functions.

The best practice on a fairly sized PI system is to limit the PI cache to fewer than 20,000 entries. This option was best suited for mapping basic data objects.

  • Replicate to PI ABAP Stack –

In case the cross-reference data volume is high, the PI ABAP stack can be used to store the cross reference.

This data can be read from the mapping program via local RFC call to the PI ABAP stack.

This is the case in which data is not available at source, data storage requirements are high, and the performance penalty of cross-system lookups needs to be avoided.
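As mentioned above, here is a minimal sketch of a cross-system RFC lookup from a PI user-defined function, using the standard mapping lookup API (com.sap.aii.mapping.lookup). The business system BS_ECC, the channel CC_RFC_LOOKUP and the function module Z_GET_XREF with parameters IV_SOURCE/EV_TARGET are hypothetical names; error handling is reduced to falling back to the source value.

    // Imports assumed in the UDF imports section: java.io.*, javax.xml.parsers.*,
    // org.w3c.dom.*, com.sap.aii.mapping.lookup.*
    public String lookupVendor(String sourceVendor, Container container) {
        RfcAccessor accessor = null;
        try {
            // Locate the receiver RFC channel pointing at the backend
            Channel channel = LookupService.determineChannel("", "BS_ECC", "CC_RFC_LOOKUP");
            accessor = LookupService.getRfcAccessor(channel);
            // Build the RFC-XML request for the (hypothetical) function module
            String request =
                "<ns0:Z_GET_XREF xmlns:ns0=\"urn:sap-com:document:sap:rfc:functions\">"
              + "<IV_SOURCE>" + sourceVendor + "</IV_SOURCE></ns0:Z_GET_XREF>";
            XmlPayload payload = LookupService.getXmlPayload(
                new ByteArrayInputStream(request.getBytes("UTF-8")));
            Payload response = accessor.call(payload);
            // Extract EV_TARGET from the RFC response
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(response.getContent());
            NodeList nodes = doc.getElementsByTagName("EV_TARGET");
            if (nodes.getLength() > 0) {
                return nodes.item(0).getTextContent();
            }
        } catch (Exception e) {
            // On any lookup failure, fall through and return the source value
        } finally {
            if (accessor != null) {
                try { accessor.close(); } catch (Exception e) { /* ignore */ }
            }
        }
        return sourceVendor;
    }

For the replicated value-mapping cache (the previous option), no custom lookup code is needed: the standard optimized read is XIVMService.executeMapping() from com.sap.aii.mapping.value.api.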

Based on the above options the following flow chart could be defined.

[Flow chart: cross-reference strategy decision tree]

Based on the above chart, the scenarios were decided in the following manner.

  • Basic value mapping requirements –

This is a case in which the mapping was not available at the source. The volume of data was around 5,000 records and the frequency of use was high. Thus a consolidation system, SAP ECC, was chosen to maintain the cross reference; it was then replicated to the PI value mapping cache using the standard value mapping replication techniques.

This resulted in high performance for frequently run interfaces.

  •   Master Data Cross reference –

This is the use case in which the data volume was high, data was not available at source, and performance requirements were high. The value mapping cache was not an option here; the data was replicated to ABAP stack tables instead, and a local RFC lookup was done from the PI mapping.

These RFC calls were better optimized than cross-system RFC/SOAP lookup calls.

As the cross reference was available in MDM, a simple replication interface was built from MDM to PI (using a receiver RFC adapter) to keep the data up to date.

The PI version used here was PI 7.11.

The Code Life Cycle - Managing Custom Code?

In the majority of projects that implement SAP PI, developments that require custom adapter modules and Java proxies are relatively few in number. But there are projects in which most of the developments are based on such custom code. Also, with the single-stack concept gaining importance, these developments will gain more momentum.

How do you manage your Java code when it comes to SAP PI?

I assume most of us don't really bother. We get our coding done in NWDS, compile the code locally, export the JAR or EJB, load it onto the server and voila, we are done.

But as the code base increases, managing this gets tricky. PI out of the box doesn't provide you any help either. You can edit your code in the ESR but only hope for it to compile by itself. If it could, it would be wonderful, wouldn't it?

One of the recommended ways is to implement NWDI and have the code centrally maintained. But I personally haven't found it easy to configure NWDI and start hosting my code. Ignoramus? Mea culpa! (Maybe SAP can indeed host a good how-to guide to help us with this?)

With the Eclipse integration into the ESR, I believe a new dimension of possibilities will open up. Imagine code management via the IDE, integrated with a check-in / check-out version management feature.

I hope this is what SAP is aspiring toward. In any case, I have already posted this on Idea Place and will wait to see if anyone picks it up.

Question:

How do you manage your custom Java code? Do you use NWDI extensively for SAP PI projects? Or is there any other way you are achieving this in your SAP PI projects?

Flexible Serialization in SAP PI

Let us start by defining what strict serialization is: serialization during message processing is a feature by which messages are delivered to the target, via a middleware, in the same sequence as generated by the source.

The quality of service defined for this is EOIO (Exactly Once In Order).

Serialized messages are processed in a specified queue both on the middleware and in the target system. In case of a processing error on any particular message in the queue, the queue gets blocked; all subsequent messages go into a wait state until the error is resolved.

Some implementations have a unique requirement for flexible serialization. In this case, messages must be serialized, but in case of an error the other messages can continue, staying serialized, ignoring the failed one, which can be attended to manually at a later point in time.

The following are the advantages of flexible serialization.

1.  The message processing remains serialized in the majority of (error-free) cases.

2.  The message processing for an interface does not halt due to the failure of one message on the queue.

3.  An error notification informs the support team of the failure, and the subsequent manual action can be performed by them.

It should be noted that flexible serialization should be implemented only in cases where the forecast failure percentage is low and manual reconciliation is possible.

SAP PI, like most middleware systems, does not support this out of the box. A simple WATCHER report can be written, based on the qRFC APIs, in PI and the connected ABAP-based SAP systems to achieve it. The purpose of this watcher program is to move the failed message out of the active LUWs (SMQ2) into the saved LUWs (SMQ3) for all queues requiring flexible serialization.

The following would be the details of the watcher operation.

1.  The watcher monitors all specified serialized queues in SMQ2.

2.  The watcher looks for specific failure statuses like SYSFAIL or CPICERR.

3.  Upon detecting an error, the watcher checks the queue depth in SMQ2 (running/active LUWs). The business team can define the active queue depth for flexible serialization to work.

4.  The watcher also looks at the depth of the saved queue (SMQ3). This is important to prevent message movement when there is a generic issue on the interface which would impact every flowing message.

5.  Based on the above parameters, the watcher then moves the message to SMQ3.

6.  Finally, an alert notification should be sent out to the respective users, specifying the movement.

The following is the selection screen.

[Screenshot: watcher report selection screen]

The following is the code snippet for the same.

TYPE-POOLS: salrt.
* Get all errored inbound queues
TABLES: trfcqin.
SELECT-OPTIONS: s_qname FOR trfcqin-qname,
                s_eqname FOR trfcqin-qname,
                s_qstate FOR trfcqin-qstate,
                s_iqname FOR trfcqin-qname.
PARAMETERS: p_pdf AS CHECKBOX DEFAULT 'X',
            p_qdepth TYPE int4,
            p_sqdep TYPE int4.
DATA: v_err_qs TYPE sy-tabix,
      v_err_qss TYPE string,
      v_err_msg(200) TYPE c,
      v_msg TYPE string,
      v_trfcmess TYPE t100-text,
      v_qstate TYPE trfcqin-qstate,
      o_cont TYPE  REF TO if_swf_cnt_container,
      o_err TYPE REF TO cx_root,
      w_qin TYPE trfcqin,
      w_qstate TYPE trfcqstate,
      w_tid TYPE arfctid,
      t_no_pdf_cont TYPE soli_tab,
      t_qin TYPE TABLE OF trfcqin,
      t_qstate TYPE TABLE OF trfcqstate,
      v_q_deep TYPE sy-index,
      ls_message TYPE zsca_string,
      lt_message TYPE zttca_string,
      lt_qview TYPE TABLE OF  trfcqview,
      ls_qview LIKE LINE OF lt_qview.
*CONSTANTS: C_INT_ID TYPE ZOBJID VALUE 'ECC_BLOCKED_QUEUES'.
START-OF-SELECTION.
*Identify the queues
  CALL FUNCTION 'TRFC_QIN_GET_ERROR_QUEUES'
    EXPORTING
      client      = sy-mandt
      with_qstate = 'X'
    TABLES
      qtable      = t_qin
      qstate      = t_qstate.
  DELETE t_qin WHERE NOT qname IN s_qname.
*  DELETE T_QIN WHERE QNAME IN S_IQNAME.
  IF NOT s_qstate[] IS INITIAL.
    LOOP AT t_qin INTO w_qin.
      CLEAR: v_qstate.
      CALL FUNCTION 'TRFC_QIN_STATE'
        EXPORTING
          qname                   = w_qin-qname
        IMPORTING
          qstate                  = v_qstate
*         QLOCKCNT                =
          qdeep                   = v_q_deep
*         QRESCNT                 =
*         WQNAME                  =
*         ERRMESS                 =
*       EXCEPTIONS
*         INVALID_PARAMETER       = 1
*         OTHERS                  = 2
                .
      IF sy-subrc <> 0.
      ENDIF.
      IF  v_q_deep < p_qdepth.
        DELETE t_qin.
      ELSE.
        CHECK NOT v_qstate IS INITIAL.
        CHECK v_qstate NOT IN s_qstate.
      ENDIF.
    ENDLOOP.
  ENDIF.
*   * Move the first LUW to the saved queue
  LOOP AT t_qin INTO w_qin.
    CLEAR: w_tid.
    CALL FUNCTION 'TRFC_QINS_OVERVIEW'
      EXPORTING
        qname  = w_qin-qname
        client = sy-mandt
      TABLES
        qview  = lt_qview.
    READ TABLE lt_qview INDEX 1 INTO ls_qview.
    IF ls_qview-qdeep < p_sqdep.
      CALL FUNCTION 'TRFC_QIN_GET_FIRST_LUW'
        EXPORTING
          qname                   = w_qin-qname
          client                  = sy-mandt
*     NO_READ_LOCK            = ' '
       IMPORTING
         tid                     = w_tid
*     QSTATE                  =
*     WQNAME                  =
          errmess                 = v_trfcmess
*     FDATE                   =
*     FTIME                   =
*     FQCOUNT                 =
*     SENDER_ID               =
*   TABLES
*     QTABLE                  =
       EXCEPTIONS
         invalid_parameter       = 1
         OTHERS                  = 2
                .
      IF sy-subrc <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ELSEIF w_tid IS NOT INITIAL.
*      IF NOT S_EQNAME[] IS INITIAL AND W_QIN-QNAME IN S_EQNAME.
*        WRITE: / '*', AT 3 W_QIN-QNAME, AT 40 W_TID, AT 70 V_TRFCMESS.
*        CONTINUE.
*      ENDIF.
        SUBMIT rstrfcdk AND RETURN WITH tid = w_tid.
* Activate the QIN scheduler for this queue.
        CALL FUNCTION 'QIWK_SCHEDULER_ACTIVATE'
          EXPORTING
            qname = w_qin-qname.
        WRITE: / w_qin-qname, AT 40 w_tid, AT 70 v_trfcmess.
      ENDIF.
    ENDIF.
  ENDLOOP.

The above queue movement should be notified to the business users with the help of an alerting mechanism.

Idea Place and a case for SAP PI

Almost a year ago, I posted a blog titled 'Contribute to better a Product? Do you have ideas for SAP PI?'. The objective was to get a dedicated page for Process Integration so that consultants, users and customers, if interested, could post their thoughts and suggestions on improving the product.

Currently Process Integration and its future are under review. The wait has exceeded a year now, but the interest of many fellow community members in this area has not diminished. Recently we had Prateek Raj Srivastava open a wiki page for suggesting improvements to PI. I also met a customer who said he has initiated a dialogue with SAP to suggest improvements to the product. There are user groups focused on product improvements to PI, but they are a closed network, and not everyone can discuss and contribute their ideas.

We need a place for ideas, and the question I have is: 'Is Idea Place the right place for this specific need?'. I hope someone managing Idea Place can answer it. I personally believe it is, since it will be open enough for consultants, customers and other stakeholders. But if I am proven wrong, then the wiki page started by Prateek will need to be publicized more, so as to bring it to the attention of contributors and get the ideas flowing.

PI has matured over the years, but there is still a lot that has to go into the product to make it an ideal fit for a multitude of requirements. The community is keen to contribute, and I am sure SAP will be happy to listen.

Handling two different messages within mapping

In one of our project assignments, we had a kind of strange, or rather tricky, requirement. This was supposed to be a normal XML-file-to-web-service scenario, but the source system delivers an XML file which contains customers' complaints / acknowledgments. The file naming scheme for both complaints and acknowledgements was the same, and we had to process only those XML files that had complaints in them. There was no way to identify the type of data from the file name, as both the complaints and the acknowledgements had the same file name, but different root tags.

After a lot of research, the only solution we came up with was to have two different inbound interfaces, one for complaints and the other for acknowledgements: first pick the file and convert it into a string using a Java mapping, then use a graphical mapping to split it into complaints / acknowledgements.

Below are the steps:

First, we create a dummy data type with just one element in it, as shown in the screenshot below.

Then create the Message Type and Message Interface for this data type and set the direction of the message interface as Outbound.

This interface is going to be used as the Sender Message Interface.

Then import the XSDs / create the data types and message types for the complaint and acknowledgment. Create their message interfaces accordingly, with direction inbound.

In the next step, we wrote a simple Java mapping program to convert the given XML input into a string, and imported it into the "Imported Archives". The output of this Java mapping is passed to a graphical mapping, which creates the XML message for the complaint / acknowledgement based on the input.

Java Mapping:
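The mapping class itself was shown as a screenshot; below is a minimal sketch of such a mapping, assuming the classic StreamTransformation API (the message type name MT_Dummy and its namespace are placeholders). It reads the complete source document, escapes the markup, and wraps it as text inside the single element of the dummy message type.

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.Map;
    import com.sap.aii.mapping.api.StreamTransformation;
    import com.sap.aii.mapping.api.StreamTransformationException;

    public class XmlToStringMapping implements StreamTransformation {

        public void setParameter(Map param) { }

        public void execute(InputStream in, OutputStream out)
                throws StreamTransformationException {
            try {
                // Read the complete source XML into a string
                ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int len;
                while ((len = in.read(chunk)) != -1) {
                    buffer.write(chunk, 0, len);
                }
                String xml = buffer.toString("UTF-8");
                // Escape markup so the whole document survives as text content
                String escaped = xml.replaceAll("&", "&amp;")
                                    .replaceAll("<", "&lt;")
                                    .replaceAll(">", "&gt;");
                // Placeholder message type and namespace
                String result = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                    + "<ns0:MT_Dummy xmlns:ns0=\"urn:example:dummy\"><value>"
                    + escaped + "</value></ns0:MT_Dummy>";
                out.write(result.getBytes("UTF-8"));
            } catch (Exception e) {
                throw new StreamTransformationException(e.getMessage(), e);
            }
        }
    }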

The graphical mapping is a one-to-many mapping, with one outbound interface and two inbound interfaces, the occurrence of the inbound interfaces being defined as "0..1". The outbound interface is our dummy message interface, and the inbound interfaces are the Complaints and Acknowledgments interfaces.

The graphical mapping takes the XML input in string format and uses a substring UDF to fetch the values.

The UDF "getString" in the above screenshot takes the XML string and the beginning and ending tags of the element "messagetype" as input, and returns the actual value of the element "messagetype". If this value is "Complaint", the target structure "ichicr" (for complaints) is created; otherwise the other structure, "ichicsrack" (for acknowledgments), is created.

Code for getString():
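The UDF itself was also shown as a screenshot; here is a minimal sketch of what such a substring-based UDF could look like (the signature follows the usual graphical-mapping UDF conventions; the tag parameters are supplied by the mapping as described above):

    // Returns the text between startTag and endTag in the XML string, e.g.
    // getString(xml, "<messagetype>", "</messagetype>", container).
    public String getString(String xml, String startTag, String endTag, Container container) {
        int begin = xml.indexOf(startTag);
        int end = xml.indexOf(endTag);
        if (begin < 0 || end < 0 || end < begin) {
            return "";
        }
        // Skip past the opening tag itself and return the enclosed value
        return xml.substring(begin + startTag.length(), end);
    }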

The same logic is used to retrieve the values of the other elements, which are populated into the target elements as shown in the screenshot below.

Then, create the operations mapping as shown below:

This mapping returns either the Complaint or the Acknowledgement message at a time; the other message is empty. Create the configuration scenario in the Integration Directory with one sender and two receivers.

Result of Operations Mapping:

1. For Complaints:

2. For Acknowledgments:

I hope this blog is helpful.
