SAP PI Training @ SAPNWTraining.com

Graphical RFC Lookup and How you can design lookups for multiple values to be retrieved in a single call


The Requirement:

1. Lookup for a value in ECC and populate the target with the result

2. The input data has the node (that contains the input field to the RFC lookup) with occurrence as Unbounded

3. The target node (that holds the target field) is Unbounded

The Ideal Design:

The requirement is fairly easy. You can do a simple lookup per input value and retrieve the data. But when the input field occurs more than, say, 1000 times, there is a huge overhead in having to do that many lookups. (Imagine 1000 RFC calls in one single execution of the mapping.)

So the RFC lookup should ideally take multiple values as its input and respond back with multiple values.

What we can do:

1. Design an RFC that has a table parameter.

image

Note: The referenced structure has three optional fields. The first two fields can be used as input and the third field is the corresponding output. In this example, only the postcode field is being filled.

2. Use the RFC in the RFC Lookup function.

The trick here is to design your mapping so that a single call is initiated instead of multiple calls. In PI terms, that means you will have to play around with the context. Sounds easy, but we all know playing around with contexts can be a pain and, in a way, fun :)

image

For the RFC to be executed only once, we have to create the complete RFC request structure (multiple item nodes) in one go, initiate the call, get the response, and close the accessor.

So the logic of the mapping is to replicate the 'Item' node of the RFC and provide the input value to each corresponding Postcode field of the Item node.
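
The batching logic above (one request carrying all input values, then one call) can be sketched outside PI. The function and field names below are hypothetical; in the real scenario the call runs through the graphical RFC lookup function:

```python
def build_rfc_request(postcodes):
    """Build one RFC request whose ITEMS table holds one row per input value."""
    return {"ITEMS": [{"POSTCODE": pc} for pc in postcodes]}

def lookup_once(postcodes, call_rfc):
    """Issue a single RFC call for all postcodes instead of one call per value."""
    response = call_rfc(build_rfc_request(postcodes))
    return [row["RESULT"] for row in response["ITEMS"]]
```

With 1000 input values this issues exactly one call instead of 1000, which is the whole point of the table-parameter design.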

Display Queue of the RFC lookup Function:

image

What the above means is that there is one call, with 12 Item nodes, each containing a Postcode as input.

On execution, with the trace level set to ALL, you can see the RFC request being built as we expect it to be:

image

The complete log (for the RFC request being built and the response received in one single call) can be seen here:

RFC Communication channel log:

I executed the mapping 5 times at different intervals. The log shows only 5 RFC calls being made:

image


Conclusion:

Designing the RFC lookup to make a single call can improve performance considerably. So when you have a scenario that deals with multiple inputs (multiple fields, i.e. unbounded nodes), the RFC should be developed with table parameters and the graphical mapping designed to make the right call.

Credits:

For the ABAP code that makes the RFC work - Kathirvel Balakrishnan, a genius when it comes to ABAP code

For the graphical mapping - I was too lazy to do that :) Thanks to Madhav Poosarla (one of the most dedicated colleagues I have worked with) for fitting the pieces together

SLD Topology: How-To Gather and Distribute SLD-Data in Your IT-Landscape?

Introduction

This blog is meant to summarize discussions I have had with dozens of customers and colleagues. While all the answers are written down and updated in the SLD Planning Guide, it might be useful to start with a short blog to get the big picture first and then check the much more comprehensive guide for details and exceptions.

There are three things that define your SLD strategy: general constraints such as the separation of development and production, the SAP applications you use, and the way SLD systems exchange information.

SLD Topology

SLD topology describes the required number and placement of SLD systems in IT landscapes and the connections between them.

Reasons to Have more than One SLD

Many applications rely on SLD data. Among these are - most prominently - SAP NetWeaver Process Integration (PI), Web Dynpro for Java, and the SAP Solution Manager. Reasons to have more than one SLD system are manifold:

  • Areas in front of and behind a firewall: If something needs to be available in front of and behind a firewall, you need at least two systems.
  • Separation of development, quality assurance, and productive systems: This is most easily and most importantly explained for PI - Business Systems are developed before they are meant to be used productively. The easiest way to separate the non-productive and productive states is to separate the development and productive SLDs.
  • Separation of managing and managed systems: SAP does not recommend runtime dependencies between the SAP Solution Manager and productive systems but, on the other hand, recommends having a "local" SLD activated on the SAP Solution Manager system. Since SLD data is crucial for all applications mentioned earlier, at least one more SLD is required if any other SLD client application is used.
  • Having 24/7 business hours does not allow for maintenance that causes downtime: A backup SLD system is needed to ensure the availability of SLD data.
  • Extremely high security requirements, where certain SLD systems need to be isolated from the network, may also call for isolated SLD systems.

Therefore, there is only one scenario in which a single SLD is enough: landscapes where, of all client applications of the SLD, only the SAP Solution Manager is used - neither PI, nor Web Dynpro for Java, etc. If that is your landscape, activating the SLD on your SAP Solution Manager system is sufficient. If not, I hope the following will help you.

Mechanisms of Data Exchange between SLD Systems

Once you have found out that you need more than one SLD, you should define how to transfer data from one SLD to the other, addressing the needs of all client applications of the SLD while minimizing the manual effort.

There are three mechanisms of data exchange between SLD systems:

Transport Mechanisms of the SLD

Figure 1: Mechanisms of Data Exchange between SLD Systems and their use cases.

From this table you can see that in a landscape more than one mechanism will typically be used, because each of them has different advantages.

  • Automatic forwarding (also named bridge forwarding) transfers all technical systems' data that a (source) SLD gets from the data suppliers unaltered to any target SLD. Since this data is sufficient for most clients and cannot be retrieved otherwise, this method is very important in every landscape.
  • Export/Import allows transporting all kinds of data and provides full control of the point in time when changes become available in the target system. It is therefore mainly used in PI scenarios, where Business Systems and Products/Software Components are developed in the SLD to be available in time - not too late, but not too soon either! - in the productive system. Export and import of SLD data can be handled centrally with CTS+. Another use case for export/import of SLD data is the high-security settings mentioned earlier. In extreme cases, a special user can personally walk to an SLD carrying the update with them.
  • Full automatic synchronization transfers each and every SLD change immediately to all target systems without bothering you with messages, which is excellent in many but not all cases: For example, it helps in SLD housekeeping, because deletions are transported along with all other data. This, however, is also the reason why this sync method cannot be used for all SLD data transfers: It does not provide control of any kind (apart from activating and de-activating the sync). Full automatic synchronization is therefore not to be used for the PI scenario described under Export/Import - the sync would also transport deletions, and a Business System deleted erroneously in the Dev-SLD would cease to exist in production seconds later... Therefore, there remain two main use cases for this mechanism: The first is creating a hot-backup SLD to avoid gaps in SLD data availability caused by a planned downtime, and the second is the initial data load of a newly installed SLD system to prepare the migration of SLD data.

Note: In your landscape, these mechanisms will be combined (as shown in the default landscape below). Nevertheless, you must not use several synchronization mechanisms for the same SLD data on the same synchronization path (for example, you do not set up fully automatic synchronization from SLD A to SLD B and configure automatic forwarding from SLD A to SLD B because both mechanisms would synchronize landscape data. However, you could supplement automatic forwarding with regular export/import of manually entered data from SLD A to SLD B).

For details and the availability of these mechanisms, see the SLD Planning Guide.

Default SLD Topology - SLDs in a Typical IT Landscape

From the things said so far, we get the following recommendation for a typical landscape where several SLD client applications are used:

Default SLD Topology

Figure 2: Default SLD topology in a landscape where Process Integration is used

Recommended SLD topology in a default landscape (as shown in figure 2):

  • The (runtime) SLD of the productive systems' area acts as a central SLD, gathering all technical systems' data via SLD data suppliers. It forwards this data to all other SLDs. This SLD can run standalone on a dedicated server or, for example, on the PI server.
    In many cases a high-availability setup is used for this SLD.
  • A separate (design time) SLD exists in the development area. Business Systems and Products/Software Components created here are exported from and imported into the productive SLD - preferably using CTS+ to manage exports and imports centrally.
  • Optionally, a backup SLD can be added to the landscape and kept up-to-date by full automatic synchronization with the central SLD.
  • Client applications (such as PI, NWDI, or front ends like Browsers or the Developer Studio) access the SLD of their area or according to their function.

QA and sandbox SLD systems can also be added; these would be part of the non-productive area (such systems are not shown in the picture).

Sizing of SLD Systems

Sizing of systems is a topic in all landscapes. You'll find details regarding this topic in the blog on SLD sizing.

SLD Topology in Big Landscapes

Generally speaking, the design of the SLD topology in big landscapes works as for the default landscape: The needs of the client systems are the same; therefore the data transfer mechanisms stay the same. Nevertheless, some new challenges may appear, such as the existence of more than one set of PI systems or the wish to have more than one full automatic sync connection between SLD systems. While the design of such a landscape cannot be described in one blog and needs a project to identify and address all business needs, some hints might help.

Special Use Cases in SAP NetWeaver Process Integration

Business Systems' names need to be unique. In very big landscapes with separated PI systems - or after a merger with another company also using PI - you could run into trouble when identical Business System names "collide" in one SLD. The only solution here is to ensure the uniqueness of these names organizationally.

Topologies of SLDs with Full Automatic Sync Connection

Usually, full automatic synchronization is used to keep backup SLDs in sync. It is not advisable to use a full automatic sync connection to sync the SLDs of the DEV and PRD areas; you cannot update the SAP Solution Manager's SLD this way and - as described earlier - this is not necessary either.

A new point is that in big landscapes more than two SLDs might have a full automatic sync connection. Many topologies are valid, but the "unique path principle" must be fulfilled: From any SLD to any other there must be only one path for messages, to guarantee that messages are handled in the correct sequence.

Unidirectional circular connections are supported: A message is only accepted once in any SLD so the transport of messages stops at the origin.
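
The unique path principle lends itself to a mechanical check. The sketch below (illustrative only, not an SAP tool; the node labels are made up) counts the distinct simple message routes between every pair of SLDs in a directed sync topology and flags any topology with more than one route:

```python
from collections import defaultdict

def count_simple_paths(edges, src, dst):
    """Count distinct simple directed paths (no revisited node) from src to dst."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)

    def dfs(node, visited):
        if node == dst:
            return 1
        return sum(dfs(nxt, visited | {nxt})
                   for nxt in graph[node] if nxt not in visited)

    return dfs(src, {src})

def unique_path_ok(edges):
    """True if no ordered pair of SLDs is connected by more than one path."""
    nodes = {n for edge in edges for n in edge}
    return all(count_simple_paths(edges, s, d) <= 1
               for s in nodes for d in nodes if s != d)
```

Under this check, a bi-directional backup pair and a uni-directional ring pass, while any topology offering two routes between the same pair of SLDs fails.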

The following figure shows examples of valid topologies of SLDs in full auto sync. Only SLDs with full automatic sync connections are shown; more SLDs with other connection types can be added:

Valid Topologies for SLDs in full automatic synchronization

Legend of SLD full automatic sync topologies

Figure 3: Valid topologies in SLDs using full automatic synchronization

  • Bi-directional full automatic synchronization connection between two systems (SLD 1 and SLD 2) - use case "backup". (This is the only bi-directional circle allowed, because there is only one other SLD. Of course, a uni-directional connection would be valid in any case where a bi-directional connection is allowed.)
    More SLDs can be added, as long as no circles are introduced.
  • Star topology (SLDs 1-4); SLD 1 would act as a master SLD here.
    This can also be combined with a linear connection (see SLD 5) or other "stars" as long as the unique path principle is valid.
    (The part SLD 1 <-> SLD 2 <-> SLD 5 is a longer variant of the backup topology.)
  • A valid circular topology (SLDs 1-4) can be used with uni-directional connections. Introducing bi-directional connections (even one) would violate the unique path principle. This also only works with all connections going in the same direction (clockwise or anticlockwise): Changing the direction of fewer than all connections will invalidate the setup (see "invalid topologies").

Valid topologies can be combined with other topologies, as long as the unique path principle holds. An example would be a development SLD receiving data via bridge forwarding from an SLD that is in full automatic sync with others, and sending data back by export and import.

In all invalid topologies there exists more than one path for messages between two SLDs - this mainly happens in circular topologies:

Invalid Topologies of SLDs in Full automatic Sync

Legend of SLD Topologies in Full Automatic Sync

Figure 4: Invalid topologies in full automatic synchronization

  • Invalid circular uni-directional topology (SLDs 1-4) with 4 uni-directional connections.
    This example shows that there also is a way to create invalid topologies with uni-directional sync connections: here, a message X in SLD 4 can reach SLD 2 via SLDs 1 and 3 - successor message Y in SLD 1 might reach SLD 4 earlier than message X, resulting in incorrect data.
  • Invalid circular bi-directional topology (SLDs 1-4) with 4 bi-directional connections.
    A message X in any SLD can reach any other SLD clockwise and anticlockwise - successor message Y in SLD 1 might reach SLD 4 earlier than message X, resulting in incorrect data.

Other Hints for SLD Data Distribution

One last remark: A problem similar to the unique path principle violation in full automatic sync connections can occur when the same information is gathered in more than one SLD and is then aggregated in one SLD. This can cause problems especially when renaming of objects is involved.

PI/XI: ActiveMQ - free but yet powerful JMS provider


When I wanted to try out SAP PI's JMS adapter for the first time, I wondered whether there was
any way to use it with a free JMS provider. I knew about MQSeries and SonicMQ, but those
were a little too big to start with, especially if you don't have a license. Then I found out
about the J2EE engine's JMS queue, which can be used for this purpose; its usage was later presented in one of William Li's blogs:
How to use SAP's WebAS J2EE's JMS Queue in Exchange Infrastructure
This was what I was looking for, but the biggest drawback of that approach was
that there was no monitoring or testing tool for the J2EE engine's JMS queues. It turns out, however,
that there are a few alternatives, and I'd like to show you one of them - ActiveMQ by Apache.

ActiveMQ is a powerful JMS platform and can easily integrate with SAP PI. Below you can find
a step by step guide on how to make it talk to SAP Process Integration.

1. at first you need to download the latest version: http://activemq.apache.org/download.html

2. then you need to use the file activemq-all-5.3.0.jar to create a deployable driver as per OSS note:

1138877 PI 7.1 : How to Deploy External Drivers JDBC/JMS Adapters

(remember to remove the content from the javax/jms folder as per the description in the OSS note)

3. next you can deploy the JMS driver to SAP PI using JSPM (in case of using SAP PI 7.1)

4. then you should get your ActiveMQ running by following the guide published on this page: http://activemq.apache.org/getting-started.html

5. once you get it running you can open the page http://localhost:8161/admin and create two queues that will be used later on with SAP PI (one for the outbound and one for the inbound scenario); my queues are named testdlaPI (outbound) and testdlaPIIB (inbound)

6. now once this is done we can create our JMS communication channels. You need to select Generic JMS configuration and fill in the data as per the screenshot below for both channels (only the queue name will be different)

Remember that the port is in most cases 61616, unless another one is specified in the configuration file activemq.xml


7. then you can check if the channels are working properly in communication channel monitor




8. if they are working correctly you can create a whole JMS - JMS scenario using those two channels

9. if both channels are working we can try to put a message into one of our queues (outbound),
as shown in the screenshot below, by clicking the queues - send to link on the main screen of the ActiveMQ administration page (we only need to enter the message; all the other parameters can be left as they are and used later for other tests)



10. once we click the send button we should be able to see that in a few seconds the message will be available in the inbound queue and gone from the outbound queue
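
If you prefer scripted tests to the admin page, ActiveMQ also speaks the text-based STOMP protocol (port 61613 in the default configuration, assuming the STOMP transport connector is enabled in activemq.xml). A STOMP SEND frame is simple enough to build by hand; the sketch below only constructs the frame bytes, and the actual sending over a socket is left out:

```python
def stomp_send_frame(queue, body, headers=None):
    """Build a STOMP SEND frame: command line, headers, blank line, body, NUL byte."""
    lines = ["SEND", "destination:/queue/%s" % queue]
    for key, value in (headers or {}).items():
        lines.append("%s:%s" % (key, value))
    return ("\n".join(lines) + "\n\n" + body).encode("utf-8") + b"\x00"
```

Sending such a frame (after a CONNECT frame) to the broker puts a message on the queue just like the admin page's send-to link does.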


Hope this blog gave you a nice idea of how easy JMS configuration can be when used with ActiveMQ.

Consuming Webservices with tag in WSDL using XSLT


Have you seen the blog Consuming Webservices with tag in WSDL using ABAP, which presented the ABAP solution to the problem of empty structures in .NET web services? This blog shows another approach: an XSLT mapping program.

The problem is that the WSDL file generated by a .NET web service (MS Project Server, MS SharePoint Server, etc.) contains a schema link in an element of the message type definition. Here the target is given in the "namespace" attribute. Using this WSDL in SAP XI/PI, you will get a message without fields in message mapping. At runtime there will be only one element with the whole message in it as its value.

You could of course modify the WSDL by copying this structure into the "sequence" element, but to my mind that is not the right way.

Using ABAP is not suitable in some cases, especially when you haven't got an OSS key.

I have solved this problem using XSLT.

So, I got a response from MS Project's web service like this:

 


...



674457db-7ff1-4e24-81b3-b41d5594f27f
Project1


3dc33ef5-c796-4b01-8707-3b1f225f741c
Project2




In this message I want to get a list of project UIDs.

The first step was to "cut" the redundant namespaces using the XMLAnonymizerBean module:

image

Next, I wrote a script like this:

The templates in this script search for elements in the input XML, call the next template, and put its result into an element of the output file. If no further template exists, the value of the current element is returned.

So, my response from MS Project transforms to:

 


674457db-7ff1-4e24-81b3-b41d5594f27f


3dc33ef5-c796-4b01-8707-3b1f225f741c

It works!
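
The same idea can be sketched outside PI. The Python below is illustrative only (the real mapping is the XSLT above, and PROJ_UID/PROJ_NAME are hypothetical element names): it walks the response and copies just the UID values into a new document, which is what the XSLT templates do:

```python
import xml.etree.ElementTree as ET

# Hypothetical response shape; only the GUID values are taken from the real example.
RESPONSE = """<Projects>
  <Project>
    <PROJ_UID>674457db-7ff1-4e24-81b3-b41d5594f27f</PROJ_UID>
    <PROJ_NAME>Project1</PROJ_NAME>
  </Project>
  <Project>
    <PROJ_UID>3dc33ef5-c796-4b01-8707-3b1f225f741c</PROJ_UID>
    <PROJ_NAME>Project2</PROJ_NAME>
  </Project>
</Projects>"""

def extract_uids(xml_text):
    """Copy every PROJ_UID element into a new Projects document, dropping the rest."""
    root = ET.fromstring(xml_text)
    out = ET.Element("Projects")
    for uid in root.iter("PROJ_UID"):
        ET.SubElement(out, "PROJ_UID").text = uid.text
    return ET.tostring(out, encoding="unicode")
```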

And the last step is to archive the XSLT file, import it as an Imported Archive, and choose it in the Interface Mapping as the mapping program (type=XSL).

P.S.

If you need to get two or more fields, your XSLT has to be changed slightly. For example, an XSLT for project IDs and names:
Output:




674457db-7ff1-4e24-81b3-b41d5594f27f
Project1


3dc33ef5-c796-4b01-8707-3b1f225f741c
Project2

SAP PI - Reorganization of ccBPM Messages


1 Introduction

When using ccBPM (cross-component Business Process Management) in SAP Exchange Infrastructure or SAP Process Integration, problems might occur with the archiving of ccBPM messages or the deletion of ccBPM message persistency. Typically these problems show up only after a while, when database tables grow and slow down the system. This article gives a short overview of ccBPM message deletion and describes some situations that can lead to problems with ccBPM message deletion or archiving.

2 Overview ccBPM Message Reorganization

During execution of a ccBPM scenario, three different types of ccBPM runtime data or messages are produced that need to be reorganized:

1. PI Messages (including ccBPM copies in pipeline PE_ADAPTER)

2. Process Instances (Work Items)

3. ccBPM message proxies or ccBPM message persistency in Tables SWFRXI*

The following sections describe how to reorganize the relevant data.

2.1 Archiving or Deleting the PI Messages

Sometimes problems occur with PI messages that cannot be deleted or archived because of their message and adapter status. To analyze this situation, a special report, RSXMB_SHOW_STATUS, was provided with SAP Note 944727 - Status overview of XI messages. It checks the PI messages in the system regarding their message status and adapter status. As a result, it gives an overview of how many messages can or cannot be archived and shows the number of messages per status.

The report might have a long runtime, so please make sure that SAP Note 1236724 - XI3.0/7.0x/7.1x: Termination of report RSXMB_SHOW_STATUS is implemented on your system and execute the report as a batch job.

The report shows the number of messages in different message statuses:

image

For messages in message status 003 (successful) it also includes the adapter status:

image

From the ccBPM point of view, messages with message status 029, or message status 003 combined with an adapter status other than 000 or 006, typically cannot be reorganized and have to be analyzed in detail.

Usually the procedure described in SAP Note 1042379 - BPE-HT: Deleting messages that are no longer used helps in these cases. The note does not have to be implemented; it just describes the procedure to delete old messages and clean up the system from failed process instances. It is important that each step is executed one after another. When there are many ‘old’ process instances in status ‘STARTED’ or ‘ERROR’ in the system that shall not be restarted, it may not be practical to logically delete the messages manually as described under section 1c) of the note. In that case, the mass cancellation feature in transaction SWIA introduced with SAP Note 1286336 - SWIA: New "Logically Delete" function for mass processing might be used.

2.2 Archiving or Deleting the Process Instances (Workflows)

It is important not to forget to archive or delete the process instances that are created when using ccBPM. If work item archiving or deletion is not done regularly, this leads to an increasing size of table SWWWIHEAD and other workflow tables like SWWCNTP0, SWPCMPCONT, or SWWLOGHIST.

The procedure to archive or delete the work items is described in SAP Note 836092 - Archiving/deleting work items in the XI.

2.3 Deleting ccBPM Message Persistence Data

During execution of a ccBPM scenario, additional data is also created for the ccBPM message persistence (also called ccBPM message proxies). These message proxies have to be deleted regularly. They and the related process-specific data are stored in tables SWFRXICNT, SWFRXIHDR, and SWFRXIPRC.

The two reports used for displaying and deleting entries from these tables are RSWF_XI_INSTANCES_SHOW and RSWF_XI_INSTANCES_DELETE, which are described on the SAP Help page "Deleting Process Data No Longer Required". RSWF_XI_INSTANCES_SHOW can be used to display and check the relevant data first. Here it is important to use the deletion timestamp as a selection criterion. Otherwise, messages are also shown for which the deletion timestamp is not set; such messages show no value in the second column from the right. Message proxies with no deletion timestamp set cannot be deleted. In this case the procedure of SAP Note 1042379 - BPE-HT: Deleting messages that are no longer used should also be applied.

There are a few additional SAP Notes available for the report RSWF_XI_INSTANCES_DELETE, which correct program errors in this area. Please check if the following SAP Notes are already implemented on your system:

1421003 - BPE-RUN: Synchronous message proxies cannot be deleted

1157044 - BPE-RUN: All message proxies set as deletable

1147377 - BPE-RUN: Messages not picked up by reorganisation

There are also two SAP Notes that improve the performance of report RSWF_XI_INSTANCES_DELETE:

1163662 - BPE-RUN: Poor performance from RSWF_XI_INSTANCES_DELETE

1139941 - BPE_RUN: Bad performance of RSWF_XI_INSTANCES_DELETE

3 Analyzing Problematic Messages in Detail

When there are many problematic XML messages in the system, it is useful to search for the reason why they cannot be reorganized. Either the situation is caused by programming errors, which are already corrected with the SAP Notes mentioned earlier in this blog, or it can be caused by process design errors.

This section shows some examples of why XML messages stay in a status that cannot be reorganized; the most common cases are these two:

  • Messages with message status 003 – successful and outbound status 001 – Scheduled for Outbound Processing, shown with a green flag in column ‘Outbound Status’ in monitoring transaction SXI_MONITOR
  • Messages with message status 029 – Transfer to Process Engine

3.1 Messages in Status 'Scheduled for Outbound Processing' in SXI_MONITOR

There are messages with a green flag (‘Scheduled for outbound processing’) in column ‘Outbound Status’ in monitoring transaction SXI_MONITOR. These messages are in pipeline ‘CENTRAL’.

Report RSXMB_SHOW_STATUS shows messages with message status 003 and adapter status 001.

In SXI_MONITOR for such a message clicking on the PE or SA_COMM hotspot in column ‘Outbound’ leads to:

  • an empty qRFC monitor for Outbound Queues:

    - Then it could be the case that the entry in the outbound queue was deleted manually.

    - Or the message was buffered (parked) at a process instance that is already finished. In SXI_MONITOR you sometimes see the message ‘Message 0123456789ABCDEF was parked at process 012345678901, but not yet processed’. Usage of global correlations in combination with delivery mode ‘Buffering possible’ can be the reason for it. In the standard view of transaction SXI_MONITOR, check the column ‘Queue ID’ for the queue name that contains the task name (‘WSxxxxxxxx’, xxxxxxxx being an 8-digit number). This task name stands for the process definition that the message was or should be delivered to. For a solution and details on buffered messages, see the recommendations below under ‘Messages were buffered at a Process Instance’.

    - Possibly the message could not be delivered to a process instance because of a missing correlation and the system was configured to ignore this error (configuration parameter ERROR_ON_NO_CRL_FOUND = 0). SAP Note 1094028 - BPE RUN: Queue is stopped if no correlation exists describes how to find these undelivered messages in CCMS Monitoring.
  • a qRFC monitor with a queue in status ‘SYSFAIL’, then see SDN blog How to Analyze Stopped Queues in ccBPM about queues in this status and how to resolve it
  • a workflow log of a process instance in status ‘STARTED’: Then you also see that the message was buffered at the process instance (in SXI_MONITOR you get the message and an entry in the tab ‘Step History’ of the technical workflow log). Check whether the process instance will reach a receive step that will process the message, or why it is still in status ‘STARTED’. If the message shall not be delivered to this process instance, then global correlations in combination with delivery mode ‘Buffering Possible’ can also be the reason for the problem. See the recommendations below under section ‘Messages were buffered at a Process Instance’.

3.1.1 Messages were buffered at a Process Instance

You can identify this problem also in CCMS Monitoring in transaction RZ20. Choose ‘CCMS Monitor Templates’ ->’Exchange Infrastructure’ or ‘Process Integration’-> ’_xxx_Business Process Engine’ -> ‘Process Data’ -> ‘XML Messages’ -> ‘Unprocessed Messages’. You can find entries like the following:

image

Due to an open correlation, the message was delivered to a process instance, but the process instance had no active receive step at this time that could receive the message. Nevertheless, the delivery mode ‘Buffering Possible’ was defined for this process, either using transaction SWF_INB_CONF or by default. If the process instance that the message was delivered to and buffered at never reaches a matching receive step, the message remains unprocessed.

Often a global correlation is the reason for the problem. Then the solution is to change the process definition in the Integration Repository so that the process is divided into a message reception part and a transformation/send part. The correlation can then be defined locally at a block that contains only the receive phase of the process. In that case the correlation is only valid during message reception. Time-consuming transformation and send steps are executed outside the block, when the correlation is no longer valid.

SAP Note 1040354 - BPE-TS: Unprocessed Messages helps in these cases.

You can find these unprocessed messages via report RSWF_XI_UNPROCESSED_MSGS. This report is also available via a CCMS monitor template; see SAP Note 1040354 above for details. The note also includes a reference to SAP Note 894906 - BPE-RUN: Finding unprocessed messages, which describes the features of report RSWF_XI_UNPROCESSED_MSGS. It is possible to resend the affected messages with the report.

If the messages shall not be reprocessed, it is possible to finalize these messages without processing them by using report RSWF_XI_UNUSED_MSGS. Please see SAP Note 894193 - BPE-RUN: find messages that are no longer used and delete for documentation of the report.

You can also change the delivery mode to ‘Without Buffering’ for the affected process using transaction SWF_INB_CONF. Then, in case a message cannot be received directly by a process instance, the queue is set to status ‘SYSFAIL’ and all following messages for this process wait in the queue until the error is resolved and the queue is restarted.

3.2 Messages with Message Status '029 - Transfer to Process Engine' in Pipeline 'PE_ADAPTER'

Transaction SXI_MONITOR shows messages with message status 029 (Transfer to Process Engine, symbol is a grey arrow).

When executing report RSXMB_SHOW_STATUS, it shows messages with message status 029 in the system. These can be messages from process instances that did not end correctly; the process instances are in status ‘ERROR’ or ‘STARTED’. These process instances can be found using transaction SXMB_MONI_BPE and the different transactions available there (e.g. Process Selection).

If there are many more messages with status 029 in the system than processes that are not finished, then there could be a problem with the message status change when finishing a process. Check if every SAP Note mentioned in this blog is implemented in your system, especially the ones mentioned in section ‘Deleting ccBPM Message Persistence Data’ and in SAP Note 1042379.

These messages also occur in combination with the messages from section 3.1, so avoiding the problems that lead to messages in status ‘Scheduled for Outbound Processing’ reduces these messages for the future as well.

IDoc Packaging - SAP PI 7.1 EHP1 (and above)

Prior to PI 7.1 EHP1, collecting IDocs into a package required a BPM or one of the known workarounds.

From PI 7.1 EHP1, you will see a new feature that will help design scenarios in a better way when it comes to collecting IDocs.

The IDoc sender adapter is now active and has a real purpose: you can define a package size in the adapter, and that package size is used to create IDoc packages in PI.

image

The above screenshot shows you the IDoc sender adapter in PI 7.1 EHP1

So how does this feature help us?

With IDoc packaging now available as a standard feature of the adapter, some design approaches to integration scenarios can be redefined. Some examples:

1. Collect all work orders (e.g. ZWorkorder.ZWorkorder) and create a file at 6 PM every day

In XI/PI versions < 7.1 EHP1,

a. You might have gone for a BPM design for collecting all IDocs

b. Use the concept of an IDoc XML file

In PI 7.1 EHP1, you can use IDoc packaging and have R/3/ECC dispatch all IDocs via a job at 6 PM.

2. Create a file with a maximum of 1000 records, where each record corresponds to one input IDoc

With the new feature, set the package size to 1000. No BPM at all.

The basic idea is thus a simpler design and better performance.

Configurations in R3/ECC that facilitate the Design

Partner Profile

Use the collect IDocs Option

image

Report RSEOUT00

This report can be executed or scheduled as a background job to send the IDocs

image

What does the payload look like in SXMB_MONI?

image

As shown above you will have the payload (inbound to PI) with multiple IDocs in it when using the IDoc packaging feature.
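As an illustration of that package payload: the root element is the IDoc type, and each collected IDoc appears as one repeated IDOC element beneath it. A minimal Python sketch (the IDoc type and field values below are invented for the example) that counts the IDocs in one such message:

```python
import xml.etree.ElementTree as ET

# Hypothetical package payload: the IDoc type is the root element and each
# collected IDoc is one repeated IDOC element beneath it.
payload = """<ZWORKORDER01>
  <IDOC BEGIN="1"><EDI_DC40 SEGMENT="1" DOCNUM="0000001"/></IDOC>
  <IDOC BEGIN="1"><EDI_DC40 SEGMENT="1" DOCNUM="0000002"/></IDOC>
  <IDOC BEGIN="1"><EDI_DC40 SEGMENT="1" DOCNUM="0000003"/></IDOC>
</ZWORKORDER01>"""

root = ET.fromstring(payload)
idoc_count = len(root.findall("IDOC"))  # one PI message, several IDocs inside
print(idoc_count)
```

With a package size of, say, 1000, a single PI message would carry up to 1000 such IDOC elements instead of 1000 separate messages.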

I hope this was informative. It is up to you to design your scenarios and leverage the capability as you find appropriate.

PI/XI: how to get an XML file from a web page without own development

PI/XI: how to get an XML file from a web page without own development


One of the quite popular requirements mentioned on the Process Integration forum on SDN
is retrieving an XML file from a web page. It may sound strange, but SAP PI/XI
does not support such functionality out of the box (as there is no adapter that
could do it). So what kind of options do we have? Do we need to build our own
adapter, or at least a Java proxy, for that purpose? It turns out there are a few
workarounds, and one of them is to use... the file adapter :)
One of the file adapter's features lets you use something called an operating system command,
which allows you to start a script on the OS level. But how can this help, you may ask?
If you've ever heard about a tool like wget, then this should definitely ring a bell :)
It turns out that we can quite easily use a tool like wget (or any other command-line
application that downloads files from the internet) to fulfil our requirement.

Let's take a look at a sample scenario:

We need to retrieve daily exchange rates from a web page and upload them into
SAP ECC (for example as the EXCHANGE_RATE01 IDoc, which does that in standard)

How do we do that:

1. We can create a small report in SAP ECC that will send (via RFC or ABAP proxy)
an indicator to get the file from the web page; later on we can schedule it
to run daily in order to download the exchange rates file only once a day

2. Once the message reaches SAP PI/XI, it will get transformed into a file
and placed into a folder. We don't care about this file, as it only carries the indicator
for the download in its payload

3. The file adapter will execute an OS command ('command after processing' option),
which will be a script (a .bat file in the case of Windows) that will retrieve the XML file from the web

4. In order to prepare the script, we need to download wget (for Windows in my example):
http://users.ugent.be/~bpuype/wget/
(you can use any other wget version)

and then this is our script's code:

@echo off

wget http://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml

echo on


Point to notice
(we can also specify the directory to which the XML file should be downloaded)

5. Then we only need to create a new scenario (file -> IDoc) where the source will
be the XML file downloaded from the web page

Picture below shows you how this can work:



More advanced options:

Some of you might say: "But my file does not have the same name each time;
it changes, as it has the date inside". This is also not an issue;
try the code below to get the date:

@For /F "tokens=1,2,3 delims=. " %%A in ('Date /t') do @(
Set Date=%%A
)

@echo Date = %Date%

and later you can use this date variable to specify the name of the file

Hope this approach saves some of you some time by avoiding advanced (adapter, proxy)
development on SAP PI/XI

SAP ERP on iPhones

I met with a team of mobile software developers this week that have developed an interesting mobile technology that enables the user to access ALL of SAP's ERP and other SAP components on an iPhone. I am particularly intrigued because I have never seen a mobile application that can quickly enable an entire ERP with only a 25 MB download and a 15 minute set-up.

Most iPhone applications would target only specific components or business processes in ERP, but this company is enabling all of ERP. It will be interesting to watch.

The use case would be any SAP user or expert that needs to access any page or component of SAP while out of the office. It is all there on the iPhone. I was very impressed. Most often only traveling sales or service people are prioritized for enterprise mobile applications, but this technology will enable anyone in the company that needs access to the SAP system to simply login with their iPhone and go to work.

This company will be releasing the iPhone version first, and then versions for Android, Blackberry and Windows Mobile over the next few months.

BIC Mapping Designer: X2E Conversion Issues

Recently, we encountered an issue while sending out EDI documents to one of our partners: we received rejected 997 (Functional) Acknowledgments for all the messages we sent. The issue was that the segment count in the SE segment (field D_96 in the XML representation) was being defaulted to zero, which made our partner's system reject these messages.

To start with, the field D_96 in the SE segment (a common segment in all EDI messages) of the EDI 856 message (Advance Ship Notice in our case) contains the count of all segments within the transaction set, i.e. from the ST segment through the SE segment. The value for this field was being generated correctly in the message mapping, but was getting defaulted to zero when converted to EDI, i.e. at execution time of the XML-to-EDI (X2E) mapping deployed on the XI server (which is specified in the Seeburger AS2 adapter module parameters).
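For context, the value expected in D_96 is simply the number of segments from ST through SE inclusive. A rough Python sketch of that count, assuming '~' as segment terminator and '*' as element separator (the 856 fragment below is invented):

```python
def transaction_segment_count(transaction_set: str, terminator: str = "~") -> int:
    """Count segments from ST through SE inclusive -- the value that belongs
    in the SE segment's count field (D_96 in the XML representation)."""
    segments = [s for s in transaction_set.split(terminator) if s.strip()]
    start = next(i for i, s in enumerate(segments) if s.startswith("ST"))
    end = next(i for i, s in enumerate(segments) if s.startswith("SE"))
    return end - start + 1

# Tiny invented 856 fragment: ST, BSN, HL, SE -> 4 segments
sample = "ST*856*0001~BSN*00*SHIP123~HL*1**S~SE*4*0001~"
print(transaction_segment_count(sample))
```

A receiving system compares this count against the actual segments; a defaulted zero therefore fails validation, which is exactly what produced the rejected 997s.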

To identify the cause, we checked whether the correct version of the 856 message had been imported into the BIC Mapping Designer and whether the mapping was generated using the correct 856 EDI message. Everything looked fine, but still the issue persisted.

This issue was fixed by modifying the XML version of the X2E mapping, going through the following steps.

First, export the X2E map from the BIC Mapping Designer onto your local PC. It gets exported in the form of an XML file.

image

Now, open the XML file of the X2E mapping (its name will be the same as that of the actual mapping in the BIC Mapping Designer) and search for the logic that creates the SE segment, which is at the end of the file.

image

The above screenshot shows the logic in the XML file that creates the SE segment. Note that logic exists here only for creating the D_329 field, the Transaction Set Control Number of the SE segment. As we saw with the other mappings as well, the logic in the SE segment is generally present only for the D_329 field, and yet the D_96 field is normally populated with the correct segment count.

To fix the issue, we simply added an extra piece of code for the creation of the D_96 field: if the D_96 field is present in the source payload, its value is copied into the target payload.

The logic is as below:

if existsourceField("ISA.GS.ST.SE:96") && trim(ISA.GS.ST.SE:96) != ""
  copy ISA.GS.ST.SE:96 to LIST.S_ISA.S_GS.S_ST.S_SE.D_96:value;
endif

image

Save the changes and re-import the file into the BIC Mapping Designer. Then generate the ".sda" file and have it deployed on the XI system.

I hope this proves useful. I will update this blog with more issues related to X2E/E2X conversion.

New administration tool in PI 7.1

This blog applies to SAP PI7.1, SAP MDM7.1 and SAP CE7.1.

With PI 7.1, the Visual Administrator (VA) is not available any more; use the web tool SAP NetWeaver Administrator (NWA) instead.
SDM is also gone; use JSPM to deploy new packages or patches.
And there is a web-based management console, which is very useful for MDM 7.1. You can access it at http://<host>:5<instance number>13/ and use it to start/stop an MDM 7.1 instance on a UNIX/Linux system.

sapmc

For PI 7.1, you can start/stop the instance or server processes, and monitor them.

NWA contains many of the functions that you previously performed with VA:

1. Configuration Management -> Security -> Authentication

Change authentication methods (for SSO).

2. Configuration Management -> Infrastructure -> Application Resources

Manage JDBC drivers and JDBC data sources.

3. Problem Management -> Logs and Traces -> Log Viewer and Log Configuration

Consuming Webservices with <any> tag in WSDL using ABAP

Have you ever generated a consumer proxy for a .NET web service and ended up with empty output? I did, and I found a few others on SDN who did too. After you look around some more, you might find that the consumer proxy generation doesn't support certain XML tags in the WSDL. One of these is the <any> tag. This blog gives you a step-by-step guide on how to consume such a web service.

Background

There are several weblogs and help files describing how to consume a web service in ABAP. I followed one of these guides to consume a web service that was generated using .NET in our company. The web service needs an input date and produces a table output of five fields per row. Step 1 was to generate a consumer proxy using the URL of the WSDL of the .NET web service. The process worked like a charm, and I had a consumer proxy with the ABAP class generated and activated. Anxious to test it, I clicked the test icon in SE80, only to realize that I needed a logical port. With some more help from the useful blogs on SDN, I was soon running SOAMANAGER to create my logical port. This was the first stumbling block: logical port creation based on the URL didn't work, and the SDN blogs didn't give me a solution. I was able to create the logical port using the manual configuration option (details to follow in a future blog). Back to testing the consumer proxy: I filled in the date in the system-generated XML input template and hit Execute. No errors or SOAPFaultCode1 messages, finally. I had done it. I clicked on the Output tab to see what I got back. My excitement quickly fizzled when I saw a blank page. Then I checked the Original Response tab: the XML response from the web service was all there. Why was the output blank, and how do I get to the response?

Root Cause for blank output

A consumer proxy does three main jobs:

  • 1. Converts input parameters from ABAP format to XML format
  • 2. Invokes the web service with XML request and receives output in XML format
  • 3. Converts received XML response into ABAP data types

The ABAP-to-XML and XML-to-ABAP conversions of steps 1 and 3 are done using transformations (extensive help is available in SAP Help). These transformations are generated during the consumer proxy generation process. In my scenario, the transformation for step 3 did not give me the output I was looking for. It turns out that the cause of the problem was the web service WSDL itself. The WSDL used the <any> tag in the description of the output message. <any> tags make the output message description extensible, but are not supported by the consumer proxy generation wizard. I did not have the option to ask the .NET team to change their web service, so I decided to use the consumer proxy only to get the XML response from the web service, and to translate the XML response to ABAP using a simple transformation myself.
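To make the plan concrete: once the raw XML response is in hand, step 3 boils down to walking the response and extracting the rows yourself. A rough illustration of that last step in Python (the element names below are invented, not those of the actual .NET service):

```python
import xml.etree.ElementTree as ET

# Invented response shape: one <Row> per result, five fields per row.
response = """<Response>
  <Row><F1>a</F1><F2>b</F2><F3>c</F3><F4>d</F4><F5>e</F5></Row>
  <Row><F1>p</F1><F2>q</F2><F3>r</F3><F4>s</F4><F5>t</F5></Row>
</Response>"""

rows = [
    [field.text for field in row]          # five field values per row
    for row in ET.fromstring(response).findall("Row")
]
print(rows)  # [['a', 'b', 'c', 'd', 'e'], ['p', 'q', 'r', 's', 't']]
```

In ABAP the same extraction is done with a hand-written Simple Transformation applied to the response XSTRING, as shown in step 3 of the sample program below.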

Solution - Call a webservice with XML input and get XML output back

The solution doesn't go into the details of XML programming or input/output conversion between ABAP and XML; there are several blogs around that topic on SDN. What I describe here is an ABAP class that calls the consumer web service with an input XML request and receives the XML response. The XML response can then be processed by the calling ABAP program as needed.

ZWS_CALL ABAP Class

Create a class ZWS_CALL with one method as shown below. No attributes are needed.

Method signature

Importing   CLASS          TYPE SEOCLNAME
Importing   METHOD         TYPE SEOCLNAME
Importing   LOGICAL_PORT   TYPE PRX_LOGICAL_PORT_NAME
Importing   INPXML         TYPE XSTRING                 Request XML
Exporting   RESXML         TYPE XSTRING                 Response XML
Exception   CX_AI_SYSTEM_FAULT                          Application Integration: Technical Error

Method Code

method WS_CALL.
DATA:
lr_proxy TYPE REF TO cl_proxy_client,
lr_reader TYPE REF TO if_sxml_reader,
lr_writer TYPE REF TO cl_sxml_string_writer,
lr_ws_payload TYPE REF TO if_ws_payload,
lr_prot_payload TYPE REF TO if_wsprotocol_payload,
lr_request_part TYPE REF TO if_sxmlp_data_st,
lr_response_part TYPE REF TO if_sxmlp_data_st,
lt_parameter TYPE abap_parmbind_tab,
lr_classdescr TYPE REF TO cl_abap_classdescr,
lr_error_part TYPE REF TO if_sxmlp_data_st,
lr_app_fault TYPE REF TO cx_ai_application_fault,
extended_xml_handling TYPE abap_bool,
org_request_data TYPE xstring,
org_response_data TYPE xstring,
response_data TYPE xstring,
error_data TYPE xstring,
exception_class_name TYPE string.

* request_data = inpxml.
cl_proxy_st_part=>create_for_clas_method(
EXPORTING
class = class
method = method
for_serialize_request = abap_false
for_deserialize_request = abap_true
for_serialize_response = abap_true
for_deserialize_response = abap_false
extended_xml_handling = extended_xml_handling
IMPORTING
request_part = lr_request_part
response_part = lr_response_part
param_tab = lt_parameter
).

TRY.
* map xml to abap data
lr_reader = cl_sxml_string_reader=>create( inpxml ).
lr_request_part->deserialize( reader = lr_reader ).

* create proxy
CREATE OBJECT lr_proxy
EXPORTING
class_name = class
logical_port_name = logical_port.

lr_prot_payload ?= lr_proxy->get_protocol(
if_wsprotocol=>payload ).

lr_prot_payload->announce_payload_consumption( ).

* call proxy
lr_proxy->if_proxy_client~execute(
EXPORTING
method_name = method
CHANGING
parmbind_tab = lt_parameter
).


* get payload
IF NOT lr_response_part IS INITIAL.
* try payload protocol
TRY.
IF NOT lr_prot_payload IS INITIAL.
lr_ws_payload =
lr_prot_payload->get_sent_response_payload( ).
IF NOT lr_ws_payload IS INITIAL.
org_response_data = lr_ws_payload->get_xml_binary( ).
ENDIF.
ENDIF.

CATCH cx_ai_system_fault. "#EC NO_HANDLER
ENDTRY.


ENDIF.
CATCH cx_ai_application_fault INTO lr_app_fault.


IF lr_prot_payload IS BOUND.
* read response payload
TRY.
lr_ws_payload =
lr_prot_payload->get_sent_exception_payload( ).
IF NOT lr_ws_payload IS INITIAL.
org_response_data = lr_ws_payload->get_xml_binary( ).
ENDIF.
CATCH cx_ai_system_fault. "#EC NO_HANDLER
ENDTRY.

ENDIF.


* get type description of application fault
lr_classdescr ?= cl_abap_typedescr=>describe_by_object_ref(
p_object_ref = lr_app_fault ).

* get type name
exception_class_name = lr_classdescr->get_relative_name( ).

* base class cannot be mapped to XML for no stylesheet can be
* generated
IF exception_class_name = 'CX_AI_APPLICATION_FAULT'.
RAISE EXCEPTION TYPE cx_ai_system_fault
EXPORTING
textid = cx_root=>cx_root
previous = lr_app_fault.
ENDIF.


CLEANUP.
IF lr_prot_payload IS BOUND.

* try to get original payload, even in case of errors
TRY.
lr_ws_payload = lr_prot_payload->get_sent_response_payload(
).
IF NOT lr_ws_payload IS INITIAL.
org_response_data = lr_ws_payload->get_xml_binary( ).
ENDIF.
CATCH cx_ai_system_fault. "#EC NO_HANDLER
ENDTRY.

ENDIF.
ENDTRY.
resxml = org_response_data.

endmethod.

Sample program that uses the ZWS_CALL

constants:
lp TYPE prx_logical_port_name VALUE 'Logical Port Name',
cl TYPE seoclname VALUE 'Generated Consumer Proxy Class Name',
md_cdata TYPE seoclname VALUE 'Generated method in the consumer proxy class'.

* Step 1

CALL TRANSFORMATION source_transformation
SOURCE root = inpdate
RESULT XML inpxml.

*Step 2
TRY.
CALL METHOD zcl_ws_call=>ws_call
EXPORTING
class = cl
method = md_cdata
logical_port = lp
inpxml = inpxml
IMPORTING
resxml = resxml.
CATCH cx_ai_system_fault .
ENDTRY.

* Step 3
CALL TRANSFORMATION Response_transformation
SOURCE XML resxml
RESULT root = sign_itab.

Conclusion

Even in a less-than-perfect world, SAP can consume web services with modest effort. The generated consumer proxy class can be used for the parts of the job it handles well, and by combining simple XML-processing ABAP commands with the generated consumer proxy, such web services can be consumed in ABAP.

Calling ABAP Proxies using SOAP and HTTP Adapters in SAP XI3.0/PI7.0

Recently I came across a situation where I needed to post a message coming from an ERP system to a SAP ABAP proxy using SAP XI/PI. Well, we would all say: use the XI adapter. However, the business requirement was something different here. Purchase orders come from ERP as a single message, and on the SAP ABAP proxy side the business needs each purchase order (per purchase order number) as a separate message, for better monitoring/tracking.

Earlier there was a BPM setup (with the multimapping concept) for this scenario, and it was working, but with a performance penalty. Moreover, transaction volumes have increased hugely these days, causing BPM timeouts and performance problems. So we needed an alternative.

The messages (purchase orders) can be split using the multimapping concept (with or without BPM). But with BPM it costs performance. If we go without BPM, the fundamental obstacle is that the XI adapter doesn't support multimapping, as it doesn't belong to the Adapter Engine (AE).

How about the SOAP adapter? Can't we send a SOAP message to the local Integration Engine (IE) of the application system? I searched the forum... (as I remember, there was already a blog by Stephan for this: Using the SOAP inbound channel of the Integration Engine).

So I did a trial by sending a message to the local IE of the application system using a SOAP client (such as XMLSpy). Surprisingly, the message reached the local IE. I tried the same from SAP XI/PI using the SOAP adapter, and with the SAP PI SOAP adapter as well, the message reached the local IE. But the message was in general SOAP format (i.e. SOAP header and SOAP body), hence it failed in the application system. I raised a thread on SDN and realized that the SAP XI/PI SOAP adapter didn't have XI 3.0 message protocol capabilities for communicating with ABAP proxies until the release of PI 7.1 (enhanced with PI 7.11; see How To Set Up the Communication between ABAP Backend and SOAP Adapter using XI Protocol).

How about the HTTP adapter? We cannot use the HTTP adapter either, as it doesn't belong to the AE. Using the HTTP adapter we can at least post a message to the ABAP proxy (can we? yes we can). But since the HTTP adapter is ABAP-based, it doesn't support multimapping.

So finally I came up with the SOAP adapter sending a plain HTTP request (in the conversion parameters of the SOAP communication channel, check “Do Not Use SOAP Envelope”). With this, I am able to send messages to the plain HTTP adapter of the application system, and obviously the SOAP adapter supports multimapping.

----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Receiver SOAP Communication Channel Parameters

Target URL:

http://<host>:<port>/sap/xi/adapter_plain?namespace=<namespace>&interface=<interface>&service=<service>&party=&agency=&scheme=&QOS=EO&sap-client=<client>&sap-language=EN

e.g.,

http://ECCHOST:8020/sap/xi/adapter_plain?namespace=http://ABAPProxy.com&interface=ABAPProxy_IB&service=BS_ABAPProxy&party=&agency=&scheme=&QOS=EO&sap-client=150&sap-language=EN

Provide User Authentication information (username and password)

Conversion Parameters:

Check “Do Not Use SOAP Envelope”

Check “Keep Headers”

----------------------------------------------------------------------------------------------------------------------------------------------------------------------

For an ABAP proxy call with the HTTP adapter, use the same URL as above.
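To avoid typos in that long query string, the target URL can be assembled programmatically. A small Python sketch using the values from the example channel above (note that urlencode percent-encodes the namespace URL, which is standard URL encoding):

```python
from urllib.parse import urlencode

# Values taken from the example communication channel above
params = {
    "namespace": "http://ABAPProxy.com",
    "interface": "ABAPProxy_IB",
    "service": "BS_ABAPProxy",
    "party": "",
    "agency": "",
    "scheme": "",
    "QOS": "EO",          # quality of service: Exactly Once
    "sap-client": "150",
    "sap-language": "EN",
}
url = "http://ECCHOST:8020/sap/xi/adapter_plain?" + urlencode(params)
print(url)
```

The same assembled URL works for both the SOAP channel's Target URL field and a plain HTTP adapter call.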

Limitations:

1. Since we are sending the message to the inbound HTTP adapter on the local IE of the application system, we can't send attachments.

2. The message size might be limited for the inbound HTTP adapter (around max. 20 MB per message, as per my test results).

3. Since it is a plain HTTP request (with textual data) to the inbound HTTP adapter on the application system, the SOAP header/body from XI will not be passed to the application system. So the application system cannot make use of custom headers, such as dynamic configuration etc., including the standard SOAP header from XI/PI.

File Conversion using 'Nodeception'


The Background Story:

The subject is content conversion in the file receiver adapter. The SAP help states that the adapter expects the XML to be of the below format;

<DocumentName>
   <StructureName1>
      <FieldName>value</FieldName>
      <FieldName>value</FieldName>
      <FieldName>value</FieldName>
   </StructureName1>
   <StructureName2>
      <FieldName>value</FieldName>
   </StructureName2>
   ...
</DocumentName>

So ideally, the adapter doesn't expect an XML with deeper hierarchies???

The Scenario:

The expected output file format (taking a real time scenario from a utilities industry) needs to be as follows;

HEADER - Fields of Header (1-1)

(TRANSACTION - Fields of Transaction

METERPOINT - Fields of Meter Point

ASSET - Fields of Asset

REGISTRATION - Fields of Registration

READING - Fields of Reading)

TRAILER - Fields of Trailer (1-1)

A sample file output;

HEADR,H1,H2,H3

TRANS,T1,T11,T111

METER,M1,M11,M111

ASSET,A1,A11,A111

REGST,R1,R11,R111

READG,RD1,RD11,RD111

TRANS,T2,T22,T222

METER,M2,M22,M222

ASSET,A2,A22,A222

REGST,R2,R22,R222

READG,RD2,RD22,RD222

....

....

TRAIL, T1

NOTE: TRANS to READG segments will be a repeating set

The Design:

When we start the design, the ideal data type to suit the file content conversion would be as below;

image

The above exactly fits the XML format the file adapter expects in order to perform content conversion.

Disaster strikes:

Using the above message structure in the mapping will reveal a glitch.

image

You would notice that all the TRANS, METER, etc. nodes are now collated together. But this is not what we want, since we need the first TRANS followed by the first METER and so on.

The above is due to a context issue. The truth is, I have never been able to figure out a context mapping to handle this. And even if I could, would it prove to be too much of an effort?

So this is what I propose;

Create your Data type as follows;

image

Carry on with your mapping. This time since we have introduced the BODY_NODE, it will ensure that the context is maintained.

The output of the mapping will now be;

image

Everything is as expected. Only thing pending is the File content conversion to create the output file.

NODECEPTION aka Node Deception:

To have the XML prepared as expected by the File adapter, we will use a simple trick I have come to call NODECEPTION = NODE + DECEPTION.

Why NODECEPTION?

Because, here we will trick (DECEPTION) the File adapter by removing the BODY_NODE node (NODE) from the XML and provide it with the structure as expected for content conversion.

Use the java code as per this link.

The Java code will remove all instances of the BODY_NODE tag from the XML.
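The effect of that mapping can be sketched in Python (this is an illustration of the same idea, not the linked Java code; the element names come from this example):

```python
import xml.etree.ElementTree as ET

def strip_wrapper(parent: ET.Element, wrapper: str = "BODY_NODE") -> None:
    """Replace every wrapper child by its own children, in place,
    preserving document order."""
    for child in list(parent):
        strip_wrapper(child, wrapper)       # handle nested wrappers first
    new_children = []
    for child in list(parent):
        if child.tag == wrapper:
            new_children.extend(list(child))  # splice grandchildren in place
        else:
            new_children.append(child)
    for child in list(parent):
        parent.remove(child)
    parent.extend(new_children)

src = ("<MT_FILE><HEADER/>"
       "<BODY_NODE><TRANS/><METER/><ASSET/></BODY_NODE>"
       "<BODY_NODE><TRANS/><METER/></BODY_NODE>"
       "<TRAILER/></MT_FILE>")
root = ET.fromstring(src)
strip_wrapper(root)
print(ET.tostring(root, encoding="unicode"))
```

The wrapper's children stay in their original relative order, so the first TRANS is still followed by its own METER, which is exactly the flat structure the FCC expects.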

Note: For PI 7.1, I have used parameterization to make the code dynamic and hence reusable. If you are using XI 3.0 or PI 7.0, remove the parameterization snippet from the code.

Add the Java mapping to your operation mapping, so that it is executed after the original graphical mapping.

image

The output would be as below;

image

Now that we have the resulting XML as above, simple FCC parameters can create the output file.

The Next Steps:

The above 'Nodeception' method is the easiest solution I have found in such a scenario. Do you have a better, easier way? Or do you know how to manipulate the FCC parameters to handle hierarchies?

If you do, I request you to document your solution in this Wiki.

Build a SAP MDM system on Linux platform

Environment: MDM 7.1, CentOS 5 x86_64,Oracle 10.2.

CentOS is one of the open-source Linux distributions, and you can install an MDM system on it. 2 GB of memory and 15 GB of disk space should be enough for test or training.

SAP NetWeaver Master Data Management is a part of SAP NetWeaver, but installing MDM is a bit different from NW 7.0 or PI 7.1. The NW 7.0 install master can do everything; you don't need to worry about Oracle installation or configuration. For MDM, you have to install Oracle and the Oracle client, then create a database, before the installation.

The MDM console and client are only available on Windows. All we need to do is install the MDM server components: MDS, MDIS and MDSS.

General Procedures

1. Install Oracle and the Oracle client.

2. Create the database. You must use Unicode for the database character set.

DB setting

3. Configure the listener and tnsnames.

4. Execute the MDM install master. Enter the directories for the Oracle home, client and SQL*Plus. You also need to do this here: ln -s libclntsh.so.10.1 libclntsh.so.

mdm install

5. Connect to the database with the SAP MDM console.

TroubleShooting

1. MDM console error log 'Oracle Initialization Failed -- likely missing libraries/dlls'.

Check whether the MDM system administrator user <sid>adm has read and execute rights on the Oracle client directory.

2. MDM console error log 'Service 'M00', Schema 'system', ERROR CODE=12154 ||| ORA-12154: TNS:could not resolve the connect identifier specified OCI Attach, try again ...'

Log on as <sid>adm and check tnsping <SID>.
<sid>adm should have read permission on $ORACLE_HOME/network/admin.

PI collaboration and workflow interaction as future trend

A lot of my projects recently were all about workflow. HCM, the Human Capital Management module with ESS, MSS and the Universal Worklist, is one example; SRM and purchasing is another often-requested project. And if you look at PI-based projects, you can see a rise of applications that use BPM and "human intervention", the dramatic-sounding term for BPM-based workflow where users interact with the Process Integration flows.

I noticed the need to discuss the impact of all these "little" projects in broader terms with customers, to put these activities into perspective for their companies in terms of direction and investment. And I thought I would share a summary of my thoughts with the community.

At most companies, the optimization of the so-called "silo applications", the classic modules like MM, SD or PP, is maturing and coming to a certain end. New applications have a shorter, more direct focus (i.e. "Apply for Vacation" rather than "Time Management"), and within this focus you often have the requirement for human interaction.

When discussing these projects with customers, I advise them not to see only the distinctive single project, but also the overall integration of all these single projects into a process design strategy.

Most of these new projects have a Web Dynpro ABAP-based frontend component. WD ABAP has quickly proven to be a well-suited component for creating "rich client" user applications quickly. When an Enterprise Portal is involved, the integration of all applications with a workflow component is usually done using the Universal Worklist. From the user's view, all process interactions start and end in the UWL.

From a system perspective, SAP Process Integration (PI) with their BPM-component can run the technical side of the integration extremely well.

If you look at the situation of most companies, business processes across boundaries with human interaction are a logical step. Process optimization, which should always be accounted for at the end of an IT project, leads to the optimization of a series of single events, coupled into a whole process. And this is exactly the business reason for small, fast and people-driven software design.
If the process is not integrating, not eliminating media breaks, and not truly people-centric (the real "ease of use"), you will gain no benefits. This goal should always be present in these projects.

Workflow Grid

When you have a lot of processes, distinct Web Dynpro applications and web services used for interaction with SAP backends, the daunting task is to manage all these processes. The question is: how are all these development activities organized?

This is where the PI Enterprise Services Repository comes into play. It is the central place to store information about data, processes, interfaces, locations and process descriptions. Together with the Services Registry for web services, it is a powerful extension of the development process itself.

Workflow-based processes should be part of a strategic movement in the enterprise towards a process-oriented, modeled business design. This is for sure an evolution, not a big-bang project. Experience needs to be gained by all project members: developers, business owners and technical implementers need to slowly grow this into a central element of all future projects.

Workflow will integrate transactions, interfaces and people inside and outside the company into overall business processes. This is the real new application of the next decade, and you will find no other platform with this level of integration. The sum of all these components, and the art of mastering them, makes up the new IT architecture for the next decade.

Hacking SAP PI Service user password

People who have worked since the ramp-up of XI 3.0 or earlier generally know the ins and outs of XI 3.0 administration. Back then it was an all-in-one role ("all-win roles", we used to call it) where a one-man army would install, develop, take the objects all the way through production, go live and support. But with SAP XI/PI widely accepted as an integration broker and ESB, maintaining XI/PI systems became the responsibility of NetWeaver administrators aka Basis teams, and there is now a clear distinction between the roles and responsibilities of PI developers and administrators.

When messages from the Adapter Engine just vanish into the vacuum without reaching the Integration Server, IDocs don't reach the target systems, or logs and traces are not active, a seasoned XI/PI consultant will be tempted to go to SXMB_ADM or IDX1/IDX2 to check whether the post-installation steps were performed properly. But with limited authorizations, and as per the development process, he has to raise an issue for the NetWeaver admin to figure it out.

One such time, SLDCHECK failed and I had to wait for days, but the issue was still unresolved. I wanted to get to the root of it and check the configuration, but I didn't have the authorization. So I started debugging SLDCHECK and came across the PIAPPLUSER password. [Generally administrators try to keep the passwords consistent, or at least logical, across the landscape. (Un)luckily, they had the same password for PISUPER.] I jumped with joy like a kid who found a bag of candy hidden right under his desk. I hacked in, found the issue, and requested the NetWeaver admin to check this specific configuration; they fixed it.


Hacking in PI 7.0

LCR_LIST_BUSINESS_SYSTEMS uses the configuration maintained in transaction SLDAPICUST to access the SLD and retrieve the list of business systems. For local SLD installations it uses SLDAPIUSER, but for central SLDs, SAP recommends replacing SLDAPIUSER with PIAPPLUSER in SLDAPICUST (as per the configuration and post-installation guides).

Refer to Section 2.4 Basic SAP System Parameters & Section 5.17.1 Performing PI-Specific Steps for SLD Configuration for more details on Maintaining SLD connection parameters.
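To see what SLDCHECK sees, you can call the same function module yourself from a small test report. The sketch below is purely illustrative: the TABLES parameter name and the result line type are assumptions, not the verified signature, so check LCR_LIST_BUSINESS_SYSTEMS in SE37 before running anything like this.

```abap
* Hypothetical test report reproducing the SLD call made by SLDCHECK.
* The parameter name BUSINESS_SYSTEMS and the line type below are
* assumptions for illustration only -- verify the real signature of
* LCR_LIST_BUSINESS_SYSTEMS in SE37 first.
REPORT z_list_business_systems.

DATA lt_systems TYPE STANDARD TABLE OF string.  " assumed result type

CALL FUNCTION 'LCR_LIST_BUSINESS_SYSTEMS'
  TABLES
    business_systems = lt_systems   " assumed parameter name
  EXCEPTIONS
    OTHERS           = 1.

IF sy-subrc <> 0.
  " A failure here usually points at the SLD connection data
  " maintained in SLDAPICUST (host, port, user, password).
  WRITE: / 'SLD access failed - check SLDAPICUST settings.'.
ELSE.
  LOOP AT lt_systems INTO DATA(lv_system).
    WRITE: / lv_system.
  ENDLOOP.
ENDIF.
```

If this call fails with the same error as SLDCHECK, the problem is almost certainly in the SLDAPICUST connection data rather than in the PI configuration itself.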

SLDAPICUST

Figure 1: SLDAPICUST configuration in the PI system

The LCR_LIST_BUSINESS_SYSTEMS function module can be debugged to reveal the PIAPPLUSER password: open it in SE37 and set a breakpoint at line 67.

LCR_LIST_BUSSYS_Code_Breakpoint.JPG

Figure 2: Breakpoint in LCR_LIST_BUSINESS_SYSTEMS

* Code around line 67: the accessor object is instantiated with the
* SLD logon data from SLDAPICUST, so the PIAPPLUSER password is
* visible in the debugger at this point.
create object accessor.
accessor->set_tracelevel( tracelevel ).

PIAPPLUSER_Password.JPG

Figure 3: PIAPPLUSER password hacked

Caution: Changing configurations using hacked users/passwords is strongly discouraged.

Word to SAP: SAP can take this as positive feedback and release a note to encrypt the password.

Links of Help

Change PI Service user passwords with caution

SAP Note 999962: PI 7.10: Change passwords of PI service users
SAP Note 936093: XI 7.0: Changing the passwords of XI service users

Continuation: Real Customer Scenarios with SAP NetWeaver Process Integration 7.1



Customers continue to adopt SAP NetWeaver Process Integration (SAP NetWeaver PI) 7.1, including enhancement package 1 (EHP 1), in their productive landscapes. As of December 2009, around one third of all SAP NetWeaver PI customers are already live on either SAP NetWeaver PI 7.1 or EHP 1 for SAP NetWeaver PI 7.1. That is, as of December 2009, more than 760 customers use SAP NetWeaver PI 7.1 (including EHP 1) productively, distributed over more than 1,000 live installations of SAP NetWeaver PI 7.1 or EHP 1 for SAP NetWeaver PI 7.1.

Given this strong adoption of SAP NetWeaver PI 7.1 including EHP 1, I would like to provide more examples of real customer scenarios in which SAP NetWeaver PI 7.1 or EHP 1 for SAP NetWeaver PI 7.1 is used productively. As in the previous post about real customer scenarios with SAP NetWeaver PI 7.1, I would like to share customer examples from different industries. In the new presentation about real customer scenarios with SAP NetWeaver PI 7.1 including EHP 1 you can find seven customer examples from the following industries:

  • Insurance
  • Retail
  • Measurement Instruments, Process Industry Automation
  • Chemicals, Energy, Real Estate
  • High Tech
  • Materials Technology
  • Oil and Gas

The presentation about the live customer examples includes information about the scenarios these customers have implemented as well as the main benefits that SAP NetWeaver PI 7.1 including EHP 1 provides to them. The system landscapes of our customers are heterogeneous and can consist of many SAP as well as non-SAP applications. Thus SAP NetWeaver PI 7.1 including EHP 1 is used productively for the integration of non-SAP systems with non-SAP systems, non-SAP systems with SAP applications, as well as SAP systems with SAP systems. And more and more customers choose SAP NetWeaver PI 7.1 as their central and strategic integration platform, replacing and migrating third-party middleware so as to use one integration platform, namely SAP NetWeaver PI 7.1, for both SAP and non-SAP systems.

SAP NetWeaver PI 7.1 including EHP 1 is used by our customers for a broad range of business scenarios. Since SAP NetWeaver PI 7.1 is, alongside SAP NetWeaver Composition Environment (CE) 7.1, an essential part of SAP's SOA (service-oriented architecture) infrastructure, customers can use SAP NetWeaver PI 7.1 to apply SOA principles. To make this more concrete, here are some SOA principles commonly adopted by customers:

  • Apply a governed and model driven approach for service provisioning and service consumption
  • Design SOA artifacts in the Enterprise Services Repository
  • Leverage re-usable Enterprise Services (either provided by SAP or custom built)
  • Publish services (SAP and non-SAP) to Services Registry
  • Service enable legacy applications

Throughout the presentation about productive customer scenarios with SAP NetWeaver PI 7.1 including EHP 1 you can find examples of how these customers apply those SOA principles and pave their way to a service-oriented architecture.

At the end of this blog I would like to point you to two further sources of information:

And stay tuned for the upcoming next version of SAP NetWeaver PI. At TechEd 2009 we already announced the most important planned benefits, more details will be provided soon. The ramp-up for the next version of SAP NetWeaver PI is currently planned for the second half of 2010.

SAP Developer Network SAP Weblogs: SAP Process Integration (PI)