Idea Place and a case for SAP PI

Almost a year ago, I posted a blog titled 'Contribute to better a Product? Do you have ideas for SAP PI?'. The objective was to get a dedicated page for Process Integration where interested consultants, users and customers could post their thoughts and suggestions on improving the product.

The future of Process Integration is currently under review. The wait has now exceeded a year, but the interest of many fellow community members in this area has not diminished. Recently, Prateek Raj Srivastava opened a wiki page for suggesting improvements to PI. I also met a customer who said he had initiated a dialogue with SAP to suggest improvements to the product. There are user groups focused on product improvements to PI, but these are closed networks, and not everyone can discuss and contribute their ideas there.

We need a place for ideas, and the question I have is: 'Is Idea Place the right place for this specific need?'. I hope someone managing Idea Place can answer it. I personally believe it is, since it is open to consultants, customers and other stakeholders alike. But if I am proven wrong, then the wiki page started by Prateek will need to be publicized more widely to bring it to the attention of contributors and get the ideas flowing.

PI has matured over the years, but there is still a lot that has to go into the product to make it an ideal fit for a multitude of requirements. The community is keen to contribute, and I am sure SAP will be happy to listen.

Handling two different messages within mapping

In one of our project assignments, we had a rather tricky requirement. It was supposed to be a normal XML-file-to-web-service scenario, but the source system delivered XML files containing either customer complaints or acknowledgments, and we had to process only those files that contained complaints. There was no way to identify the type of data a file held, as both complaints and acknowledgements used the same file naming scheme; the two differed only in their root tags.

After a lot of research, the only solution we came up with was to have two different inbound interfaces, one for complaints and the other for acknowledgements: first pick up the file, convert it into a string using a Java mapping, then use a graphical mapping to split it into complaints / acknowledgements.

Below are the steps:

First, we have to create a dummy data type with just one element in it, as shown in the below screenshot.

Then create the Message Type and Message Interface for this data type and set the direction of the message interface as Outbound.

This interface is going to be used as the Sender Message Interface.

Then import the XSD / create the data type & message types for the complaint and acknowledgments. Create their message interfaces accordingly with direction as inbound.

In the next step, we write a simple Java mapping program to convert the given XML input into a string and import it into the “Imported Archives”. The output of this Java mapping is passed to a graphical mapping, which creates the XML message for a complaint / acknowledgement based on the input.

Java Mapping:
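The screenshot above shows our mapping class; as a stand-in, here is a minimal sketch of its core logic. It reads the raw payload and emits it, XML-escaped, as the text content of the dummy message type's single element. The element names (`mt_dummy`, `content`) are placeholders for this sketch; in PI, this logic sits inside the `execute(InputStream, OutputStream)` method of a class implementing `com.sap.aii.mapping.api.StreamTransformation`.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Core of the Java mapping: wrap the whole input XML, escaped, into the
// single element of the dummy message type so the graphical mapping can
// work on it as a plain string.
public class XmlToStringMapping {

    public static void execute(InputStream in, OutputStream out) throws IOException {
        // Read the complete input payload into memory.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        String payload = new String(buf.toByteArray(), "UTF-8");
        // Escape markup so the payload survives as element text.
        String escaped = payload.replace("&", "&amp;")
                                .replace("<", "&lt;")
                                .replace(">", "&gt;");
        String result = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                + "<mt_dummy><content>" + escaped + "</content></mt_dummy>";
        out.write(result.getBytes("UTF-8"));
    }
}
```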

The graphical mapping is a one-to-many mapping with one outbound interface and two inbound interfaces, the occurrence of the inbound interfaces being defined as “0..1”. The outbound interface is our dummy message interface, and the inbound interfaces are the Complaints and Acknowledgments interfaces.

The graphical mapping takes the XML input in string format and uses a substring UDF to fetch the values.

The UDF “getString” in the above screenshot takes as input the XML string and the beginning and ending tags of the element “messagetype”, and returns the value of that element. If this value is “Complaint”, the target structure “ichicr” (complaint) is created; otherwise the structure “ichicsrack” (acknowledgment) is created.

Code for getString():
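The screenshot carries the actual UDF; a minimal self-contained sketch of the same idea follows (method and parameter names assumed): given the XML string and the opening/closing tags of an element, it returns the text between them, or an empty string if the element is not found.

```java
// Sketch of the getString substring UDF used in the graphical mapping.
public class GetStringUdf {

    public static String getString(String xml, String beginTag, String endTag) {
        int start = xml.indexOf(beginTag);
        int end = xml.indexOf(endTag);
        if (start < 0 || end < 0 || end < start) {
            // Element not present in this payload.
            return "";
        }
        // Skip past the opening tag to the start of the element value.
        return xml.substring(start + beginTag.length(), end);
    }
}
```

The same helper is reused for every element that has to be pulled out of the string payload.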

The same logic is used to retrieve the values of the other elements, which are populated into the target elements as shown in the below screenshot.

Then, create the operations mapping as shown below:

This mapping returns either a Complaint or an Acknowledgement message at a time; the other message is empty. Create the configuration scenario in the Integration Directory with one sender and two receivers.

Result of Operations Mapping:

1. For Complaints:

2. For Acknowledgments:

Hope this blog is helpful.

Idea Place for SAP NetWeaver Process Orchestration - Process Integration

Do you have ideas and feedback to improve PI and BPM? Now you can submit them and solicit opinions from others.

When working with PI and BPM, I am sure we all come across situations where we feel that certain features or capabilities, if added to PI and BPM, would improve our usage of the products. These ideas can be in the areas of development, monitoring, administration, or anything else.

Now there is a place where we can submit our ideas and feedback. We can also view and comment on ideas and feedback submitted by others... and even vote for our favorite ones. And it is possible that we will see our ideas actually become part of the product.

To submit and view ideas and feedback, you can access the links below:
Idea Place for PI
https://cw.sdn.sap.com/cw/community/ideas/sap_netweaver/sap_netweaver_process_orchestration_process_integration

Idea Place for BPM
https://cw.sdn.sap.com/cw/community/ideas/sap_netweaver/sap_netweaver_process_orchestration_business_process_management

SCADA Integration with SAP using SAP PI – Part 3

SCADA Integration with SAP using SAP PI – Part 3 –Interface

In my previous blogs, we saw how the systems integrate and how they communicate in exchanging tag values using a testing tool. Let me now discuss the interface with the actual systems involved: SAP ERP TSW, SAP PI, the OPC Gateway systems, and the SCADA servers (OPC Servers). Based on the two previous blogs, I hope you have an idea of the systems involved.

Looking at the business case, the interface had to communicate with decentralised SCADA servers. There were 10 instances of SCADA servers in the landscape, provided primarily by INVENSYS and TELVENT. All instances contain the same tags and values, thanks to periodic synchronization between the instances. Our requirement was to fetch the values for specific tags based on the availability of the instances. Some instances were primary and were given priority; the rest served as redundant instances.

IMG 3.1 Interface Technical Connectivity

In the ERP TSW module, this read interface has to be executed at a scheduled time every day to fetch the tag values and store them in Z tables. These values are used by the module to finalise allocations, based on which ticket creation, delivery creation and billing creation happen [refer to IMG 1.1: Business Process in my first blog of this series].

So, to fetch the tag values in ERP, all the tags relevant to the SCADA server instances are stored in a Z table and used to create a proxy request for the Read operation with the relevant tags.

Synchronous Communication

Onward Journey -->

  1. In the ERP TSW module, a background program that executes every day at a scheduled time sends a client proxy request containing the SCADA tags to SAP PI.
  2. In SAP PI, the read interface receives the client proxy request and transforms the message using an operation mapping.
  3. The result of this mapping is mapped to the operation message ReadSoapIn of the web service provided by the OPC Gateways.
  4. Using conditional routing in the read interface, SAP PI sends the ReadSoapIn operation to an available OPC Gateway as a SOAP request.
  5. In the OPC Gateway, the corresponding web service receives SAP PI's ReadSoapIn request and forwards it to the OPC Server in SCADA using a .NET program and COM/DCOM technology.

Synchronous Communication

<-- Return Journey

  1. The OPC Server processes the request, reads the values for the requested tags, and sends the response back to the OPC Gateway.
  2. The OPC Gateway sends the response to SAP PI using the operation message ReadSoapOut.
  3. SAP PI receives the ReadSoapOut message, which is transformed into the ERP client proxy response.
  4. The ERP client proxy program updates the relevant table with the tag values for all available OPC Servers.

The tag values in the table are used for finalizing allocations, ticket creation, delivery and billing.
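Before the proxy response updates the Z table, the gateway response has to be reduced to tag-name/value pairs. A minimal sketch of that step, using the JDK DOM parser: the `<Item name="...">` element layout here is an assumption for illustration, as the real ReadSoapOut payload follows the OPC gateway's WSDL.

```java
import java.io.ByteArrayInputStream;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Collect tag name -> value pairs from a read response payload.
public class TagResponseParser {

    public static Map<String, String> parse(String responseXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(responseXml.getBytes("UTF-8")));
        Map<String, String> values = new LinkedHashMap<>();
        NodeList items = doc.getElementsByTagName("Item");
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            // The tag name is carried as an attribute, the value as text.
            values.put(item.getAttribute("name"), item.getTextContent());
        }
        return values;
    }
}
```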

Now let us see an abstract of the objects needed in SAP PI.

Sender and Receiver Message Structures with Mappings in PI:

IMG 3.2 Message Mapping Request

IMG 3.3 Message Mapping Response

Receiver Determination:

In ERP, a program controls the flow of the interface: to which gateway, and to which SCADA instance (OPC Server), the request is sent. Based on a source field value, the receiver OPC Gateway is determined, and the web service running for the particular OPC Server is triggered in the OPC Gateway based on the request XML.

IMG 3.4 Receiver Determination

Receiver Communication Channel:

IMG 3.5 Communication Channel Receiver

XML - Request:

IMG 3.6 XML Request

XML – Response

IMG 3.7 XML Response

I hope this blog series helped you in understanding SCADA systems integration with SAP.

SCADA Integration with SAP using SAP PI – Part 2

SCADA Integration with SAP using SAP PI – Part 2 – Systems Integration

In my previous blog, I briefly explained the systems involved and the business case. In this part, let me share how the systems integration happens. At a high level, the SAP ERP system requests tag values from a particular SCADA OPC Server instance; in response, the OPC Server replies with the tag values, the quality of each tag value, the quality of the tag, the OPC Server status, and a time stamp.

A deep dive into the Systems Integration:

As the SAP ERP system makes a request and receives a response, we implemented this as a client proxy communication to SAP PI. On the other side, the SAP PI system is connected, across the firewall, to an OPC Gateway (technically, an OPC client system). On the OPC Gateway system a standard web service is exposed; this is provided by vendors such as MatrikonOPC or Advosol who supply .NET wrapper software. SAP PI consumes this web service at the receiver end. A SOAP request sent by SAP PI is received by the service on the OPC Gateway, which then has to pass the request on to the OPC Server. On the OPC Gateway, the web service request is handed to a .NET program that converts the HTTP request into a COM/DCOM request and sends it via IIS to the OPC Server. The OPC Server processes the request and replies with the requested tag values to the OPC Gateway, which maps the response to a SOAP response; this finally reaches SAP ERP as the client proxy response. This is how the synchronous communication between the integrated systems happens.

Systems Integration

SAP ERP:

ERP makes a business request consisting of tag names:

“A tag name in terms of SCADA represents a variable that holds various quantitative measures like GCV (Gross Calorific Value), NCV (Net Calorific Value), Pressure, Volume, and Temperature. Each quantitative measure will have its own tag name.”

Because all the data stored in SCADA is in the form of tags, the ERP system must send the relevant tag names to get the values. Sample tag names and the specific codes for the quantitative measures are given in my previous blog.

As ERP sends a proxy request, it can be forwarded as a SOAP request by using the SOAP adapter with message protocol XI, pointing to the AAE (Advanced Adapter Engine).

Sample Request XML Message

OPC Gateway:

The client's landscape has two OPC Gateway systems. Communication happens with either of the gateways based on availability. Each gateway is linked to the available instances of SCADA.

The OPC Gateways are set up to handle the SOAP requests coming from SAP PI; a .NET client application (the wrapper software) on the OPC Gateway converts the call into a COM/DCOM request and sends it to the OPC Server.

.NET applications can access OPC servers only through a software layer that is usually called a .NET wrapper. There are no OPC standard specifications for a .NET interface, so different vendors offer .NET wrapper software with quite different interfaces and features.

One of the .NET wrappers we used was the ADVOSOL XMLDA.NET wrapper. It supports access to OPC DA and XML DA servers. The helper classes of the wrapper software provide user-friendly server access methods for features such as browsing and reading item values.

OPC GATEWAY COMMUNICATION

OPC DA Sample Server & Matrikon OPC Explorer Client Application

The OPC Gateway (OPC client machine), with IIS installed and the .NET wrapper running, can be monitored using an application called Matrikon OPC Explorer.

I have simulated the OPC Server and OPC Gateway on my desktop by installing the .NET Framework and IIS on Windows: the OPC .NET Sample DA Server holds the tags and tag values (generally updated by RTUs), and Matrikon OPC Explorer (standing in for the OPC Gateway) is used to view the tag values.

OPC DA 20 Server

OPC DA Server Client

In this OPC DA Sample Server we can add new tags and edit existing tag properties such as data type and quality. If this OPC DA Sample Server were connected to RTUs, it would have real tag names as mentioned in my first blog.

Since it is simulated on my machine to showcase the systems, I loaded some dummy tag names and provided dynamic values in the server instance. [Please note that in OPC Server terms a tag name is an Item ID.]

Now let’s see how to access the Tags available in OPC DA Sample Server in to Matrikon OPC Explorer application:

Since I have installed both (the OPC DA Sample Server and Matrikon OPC Explorer) on the same host machine, when I open Matrikon OPC Explorer it automatically detects the OPC Servers available on that host.

OPC Sample Da 20 Server

This application (Matrikon OPC Explorer) needs to be connected to OPCSample.OpcDa.20Server.1; to achieve this, press the Connect button. Once you press it, you can see that the server is connected, with the status shown as Running in the bottom-left window along with the refresh rate.

OPC Sample SA 20 Server.1

Now we will see how to add the Tags to view the values available in the OPC DA Sample Server Instance in Matrikon OPC Explorer application.

Press Add Tags, which opens another window allowing you to add a new Item ID (tag name) to monitor:

Matrikon OPC Explorer

Press Browse Menu Item and select Flat Browse as follows:

Matrikon OPC Explorer

You can see all the available Tags/Item ID’s:

Matrikon OPC Explorer

Double-click the relevant Tags/Item IDs you wish to explore; they will be added to the Tag to be added window.

Matrikon OPC Explorer

Now press File -> Update and Return

Matrikon OPC Explorer

With this action the tag is added to the Matrikon OPC Explorer main window, and its value is fetched from OPCSample.OpcDa.20Server.1 as shown below:

Matrikon OPC Explorer

This is how we monitor the tag values with Matrikon OPC Explorer on the OPC Gateway (OPC client). Other than Matrikon OPC Explorer, you can use any other available tool.

Now let us see how to get this value with a web service request from a web client such as the SOAPUI testing tool.

Web Service and Functions:

XML DA Sample Server

With the available web service we can read / write the tag values available on the OPC Gateways. We can use the GetProperties function to get a tag's properties, such as quality and time stamp, and the GetStatus function to check the status of the tag value (active/inactive).
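The request that SOAPUI sends for a Read call can be sketched as a small builder. This is only an illustrative sketch: the namespace below follows the OPC XML-DA 1.0 specification, but the exact request layout of a given gateway should always be taken from its WSDL.

```java
// Build a minimal SOAP envelope for the XML-DA Read operation, roughly
// what SOAPUI generates from the gateway's WSDL.
public class XmlDaReadRequest {

    private static final String XMLDA_NS =
            "http://opcfoundation.org/webservices/XMLDA/1.0/";

    public static String build(String... itemNames) {
        StringBuilder sb = new StringBuilder();
        sb.append("<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">")
          .append("<soap:Body>")
          .append("<Read xmlns=\"").append(XMLDA_NS).append("\">")
          .append("<ItemList>");
        for (String item : itemNames) {
            // One <Items> entry per requested tag (Item ID).
            sb.append("<Items ItemName=\"").append(item).append("\"/>");
        }
        sb.append("</ItemList>")
          .append("</Read>")
          .append("</soap:Body>")
          .append("</soap:Envelope>");
        return sb.toString();
    }
}
```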

SOAPUI Request

SOAPUI Response

Once the web service is available, it can be used on the receiving end in SAP PI. In my next blog, let us see the interface details for the Read operation, which is triggered from SAP ERP TSW. I hope this blog gives you some insight into the OPC Gateway and the OPC Server instance.

SCADA Integration with SAP using SAP PI – Part 1

SCADA Integration with SAP using SAP PI – Part 1 - Business case & Systems Involved

Before starting my previous project, I was a little sceptical about whether SCADA (Supervisory Control and Data Acquisition) integration with SAP using PI was achievable. But I had at least an iota of confidence that it could be done somehow. With the help of SAP xMII expert Pandiarajan P. and .NET consultant Mahesh Patel, I was able to successfully implement several interfaces integrating SCADA. Since there is no reference on this topic in SDN, I thought of putting my experience into words as a series of blogs about the integration.

1. SCADA Integration with SAP using SAP PI – Part 1 - Business case & Systems Involved.

2. SCADA Integration with SAP using SAP PI – Part 2 – Systems Integration.

3. SCADA Integration with SAP using SAP PI – Part 3 – Interface.

Business case:

Our client operates India's biggest pipeline connectivity network in gas distribution (downstream: delivery of natural gas through pipelines and gas tankers) and works under the Central Government of India. They have decentralized SCADA servers provided by different vendors such as INVENSYS and TELVENT. These servers receive pressure, volume, temperature, GCV, NCV and other values from RTUs (Remote Terminal Units) and flow meters attached across the pipelines. Based on these values, SCADA controls the gas flow in the flow meters remotely. The business case was to fetch these values into the SAP TSW module and schedule the quantity of gas to be delivered on a particular day.

Business Process

Now let me introduce you with the landscape systems involved and brief introduction about them.

  • SAP ERP 6.0 EHP 4 TSW Module.
  • SAP PI 7.1 EHP 1.
  • SCADA with OPC DA Server (2.0) – Decentralized
  • OPC Gateways - OPC Clients.

SAP ERP TSW module:

TSW stands for Trader's and Scheduler's Workbench. It provides functions for stock projection and for planning and scheduling bulk shipments using nominations, along with the relevant master data to model the supply chain. The stock projection, planning, and nomination processes enable the scheduler to schedule bulk shipments while taking into account supply, demand, and available transportation.

Processes and Functions

The following planning tools are provided by the TSW application:

  1. Planning (Replenishment Proposals and Safety/Target Stock Calculation)
  2. Stock Projection Worksheet (SPW)
  3. Location Balancing
  4. Worklist

The following scheduling tools are provided by the TSW application:

  1. Three-Way Pegging (3WP)
  2. Berth Planning Board (BPB)
  3. Nomination
  4. Ticketing
  5. Worklist

TSW Processes & Functions

Since the client's business is to deliver gas to various dealers using pipelines and gas tankers, this module is used in the project for daily nominations and for scheduling the quantity to deliver. [Ref: help.sap.com]

SCADA with OPC DA Server (2.0) - Decentralized:

What is SCADA?

SCADA stands for Supervisory Control And Data Acquisition. As the name indicates, it is not a full control system, but rather focuses on the supervisory level. As such, it is purely a software package positioned on top of hardware to which it is interfaced, in general via Programmable Logic Controllers (PLCs) or other commercial hardware modules. SCADA systems are used not only in most industrial processes (e.g. steel making, power generation (conventional and nuclear) and distribution, chemistry) but also in some experimental facilities such as nuclear fusion. SCADA systems used to run on DOS, VMS and UNIX; in recent years all SCADA vendors have moved to NT.

SCADA systems are widely used in industry for supervisory control and data acquisition of industrial processes. They are generally developed by companies that are members of standardisation committees (e.g. OPC, OLE for Process Control) and are thus setting the trends in IT technologies. The SCADA servers our client owns were provided by the following vendors:

There are multiple decentralized instances of these SCADA servers, but the servers are synchronized with real-time data at periodic intervals.

How to communicate with SCADA?

These SCADA systems include OPC (OLE for Process Control) Servers, in addition to their own data sources. OPC Servers are software applications (drivers) that comply with one or more OPC specifications as defined by the OPC Foundation. OPC Servers communicate natively with one or more data sources on one side and with OPC clients on the other. In an OPC client / OPC server architecture, the OPC Server is the slave while the OPC client is the master.

Versions of OPC Servers released so far:

Versions of OPC Servers

Data Format in SCADA:

Data in SCADA is available in the form of TAG’s, something like:

SCADA Tags

For each tag, the corresponding value is updated in the SCADA database by the RTUs (Remote Terminal Units) connected to the gas pipeline; based on these values, SCADA controls the flow of gas through the flow meters attached to the pipeline. OPC client machines request these tag values by specifying the tag name. Client machines can also subscribe to receive updated tag values every second, or at even shorter intervals.

For further information on OPC Servers, please refer to MatrikonOPC.com

OPC Gateways (OPC Clients)

OPC Gateway systems are client machines that communicate with the OPC Server in the SCADA system. These gateway systems run the Windows operating system with IIS installed and running. Communication between the OPC client and OPC Server is bi-directional, meaning the OPC clients can read from and write to OPC Servers. Classic OPC Servers utilize the Microsoft Windows COM/DCOM infrastructure as a means of exchanging data, which means these OPC Servers must run on the Microsoft Windows operating system. An OPC Server can support communications with multiple OPC clients simultaneously. The latest releases of OPC Servers also support HTTP communication, so the web services provided by such OPC Servers can be consumed directly by client applications like SAP PI or ERP.

OPC Client – OPC Server Communication

OPC Client OPC Server Communication

[Ref: MatrikonOPC.com]

These are all the systems and brief information about them.

In my next blog, let us discuss the architectural connectivity that we followed in our systems integration. I hope this blog series helps you get an idea of the integration between SCADA and SAP.

PI/XI: NWDS not only for Java development but also for PI administration.

In this article I'd like to show two common ways of using SAP NetWeaver Developer Studio (NWDS) for SAP PI administration purposes.
1. How to deploy JDBC/JMS external drivers (SDA files) without JSPM?
If you'd like to use any external JDBC/JMS driver in your PI landscape, you need to deploy the drivers first using an SDA file. In the latest PI releases you can only do that using JSPM (as SDM is no longer available). However, you can also deploy SDA files using NWDS, and below you will find the necessary steps to do that.
Step 1
Create an EAR project.
Step 2
Import the SDA file into this project.
Step 3
Make sure to set the J2EE server in Window - Preferences - SAP AS Java menu.
Step 4
Right click on the SDA file and deploy it.

As you can see, the server gets restarted and your file is deployed.

2. How to use NWDS for convenient log file monitoring?
In case something goes wrong with PI, you may need to check the log files. You can do that from multiple places, such as NetWeaver Administrator (NWA) or even transaction AL11 on your ABAP stack. Frankly, I don't like those two tools at all, because they are either slow or show the log files in a format that is very difficult to read. What about using NWDS for that purpose?
Step 1
Once you see your J2EE servers in the deployment view, you can select the "Open SAP management console" link.

Step 2
Then, under the "log files" link, you can see all of your most important logs, with their content (nicely formatted) below.

Step 3
The feature I like the most is the automatic refresh. Once you enable it, you will instantly see the latest entries in your log files, and in most cases this will enable you to find the cause of an issue very quickly.

The conclusion of this short article is: even if you're not a developer, you can start using NWDS for other purposes. This will also benefit you in the future, as (even as of PI 7.3) you can already do some PI configuration in NWDS, as shown in one of my previous articles:
PI/XI: PI 7.3 - accessing PI with NWDS - teaser

Automating Import of IDoc Metadata in Transaction IDX2

Despite all of the advances in Web service and proxy technologies over the course of the past few years, I still find that many customers prefer to use the tried-and-true ALE/IDoc technology stack. Consequently, a frequent administrative headache is the upload of IDoc metadata using Transaction IDX2. In this blog, I will demonstrate a simple report program that can be used to automate this task.

What is IDoc metadata, anyway?

If you haven't worked with the PI IDoc adapter before, then a brief introduction is in order. As you know, all messages that flow through the PI Integration Engine pipeline are encoded using XML. Thus, besides providing raw connectivity, adapters such as the IDoc adapter also must perform the necessary transformations of messages so that they can be routed through the pipeline. In the case of the IDoc adapter, the raw IDoc data (you can see how the IDoc data is encoded by looking at the signature of function module IDOC_INBOUND_ASYNCHRONOUS) must be transformed into XML. Since the raw IDoc data does not provide information about segment field names, etc., this metadata must be imported at configuration time in order to enable the IDoc adapter to perform the XML transformation in an efficient manner.

From a configuration perspective, all this happens in two transactions:

  • In Transaction IDX1, you create an IDoc Adapter Port which essentially provides the IDoc adapter with an RFC destination that can be used to introspect the IDoc metadata from the backend SAP ALE system.
  • In Transaction IDX2, you can import IDoc types using the aforementioned IDoc adapter port. Here, you can import standard IDoc types, custom IDoc types, or even extended types.

If you're dealing with a handful of IDocs, then perhaps this isn't such a concern. However, if you're dealing with tens or hundreds of IDocs and a multitude of PI systems, then this process can become tedious in a hurry.

Automating the Upload Process

Now, technically speaking, the IDoc adapter is smart enough to utilize the IDoc port definition to dynamically load and cache IDoc metadata on the fly. However, what it won't do is detect changes to custom IDocs/extensions. Furthermore, if you have scenarios during cutover which block RFC communications, not having the IDoc metadata in context can lead to unexpected results. The report below can be used to automate the initial upload process or execute a kill-and-fill to pull in the latest and greatest changes. In reading through the comments, you can see that it essentially takes two inputs: the IDoc adapter port defined in IDX1 and a CSV file from your frontend workstation that defines the IDoc types to import. Here, you just need to create a two-column CSV file containing the IDoc type in column 1 and the extension type (if any) in column 2.

REPORT zidx_idoc_load_metadata.

*&---------------------------------------------------------------------*
*& Local Class Definitions                                             *
*&---------------------------------------------------------------------*
CLASS lcl_report DEFINITION CREATE PRIVATE.
  PUBLIC SECTION.
    CLASS-METHODS:
      "Used in the selection screen definition:
      get_frontend_filename
        CHANGING ch_file TYPE string,

      "Public static method for running the report:
      execute
        IMPORTING im_idoc_types_file TYPE string
                  im_idoc_port       TYPE idx_port.

  PRIVATE SECTION.
    "Class-Local Type Declarations:
    TYPES: BEGIN OF ty_idoc_type,
             idoc_type TYPE string,
             ext_type  TYPE string,
           END OF ty_idoc_type,

           ty_idoc_type_tab TYPE STANDARD TABLE OF ty_idoc_type.

    "Instance Attribute Declarations:
    DATA: idoc_port  TYPE idx_port,
          idoc_types TYPE ty_idoc_type_tab.

    "Private helper methods:
    METHODS:
      constructor
        IMPORTING im_idoc_port TYPE idx_port,

      upload_idoc_types
        IMPORTING im_idoc_types_file TYPE string
        RAISING   cx_sy_file_io,

      import_idoc_metadata,

      remove_idoc_metadata
        IMPORTING im_idoc_type TYPE string.
ENDCLASS.

CLASS lcl_report IMPLEMENTATION.
  METHOD get_frontend_filename.
    "Local Data Declarations:
    DATA: lt_files       TYPE filetable,
          lv_retcode     TYPE i,
          lv_user_action TYPE i.

    FIELD-SYMBOLS: <lfs_file> LIKE LINE OF lt_files.

    "Present the user with a dialog box to select the name
    "of the file they want to upload:
    CALL METHOD cl_gui_frontend_services=>file_open_dialog
      EXPORTING
        default_extension       = 'csv'
        file_filter             = '.csv'
      CHANGING
        file_table              = lt_files
        rc                      = lv_retcode
        user_action             = lv_user_action
      EXCEPTIONS
        file_open_dialog_failed = 1
        cntl_error              = 2
        error_no_gui            = 3
        not_supported_by_gui    = 4
        others                  = 5.

    IF sy-subrc EQ 0.
      IF lv_user_action NE cl_gui_frontend_services=>action_cancel.
        READ TABLE lt_files INDEX 1 ASSIGNING <lfs_file>.
        IF sy-subrc EQ 0.
          ch_file = <lfs_file>-filename.
        ENDIF.
      ENDIF.
    ELSE.
      MESSAGE 'Could not determine target filename!' TYPE 'I'.
      RETURN.
    ENDIF.
  ENDMETHOD.                    " METHOD get_frontend_filename

  METHOD execute.
    "Method-Local Data Declarations:
    DATA: lo_report    TYPE REF TO lcl_report,
          lo_exception TYPE REF TO cx_root,
          lv_message   TYPE string.

    TRY.
        "Create an instance of the report driver class:
        CREATE OBJECT lo_report
          EXPORTING
            im_idoc_port = im_idoc_port.

        "Upload the set of IDoc types to load into the IDoc adapter;
        "This file should be a CSV file on the local workstation:
        CALL METHOD lo_report->upload_idoc_types
          EXPORTING
            im_idoc_types_file = im_idoc_types_file.

        "Import the set of IDoc types:
        CALL METHOD lo_report->import_idoc_metadata( ).
      CATCH cx_root INTO lo_exception.
        lv_message = lo_exception->get_text( ).
        MESSAGE lv_message TYPE 'I'.
    ENDTRY.
  ENDMETHOD.                    " METHOD execute

  METHOD constructor.
    me->idoc_port = im_idoc_port.
  ENDMETHOD.                    " METHOD constructor

  METHOD upload_idoc_types.
    "Method-Local Data Declarations:
    DATA: lt_csv_file  TYPE string_table,
          lv_idoc_type TYPE edipidoctp,
          lv_ext_type  TYPE edipcimtyp.

    FIELD-SYMBOLS: <lfs_csv_record> LIKE LINE OF lt_csv_file,
                   <lfs_idoc_type>  LIKE LINE OF me->idoc_types.

    "Upload the selected file from the user's workstation:
    CALL METHOD cl_gui_frontend_services=>gui_upload
      EXPORTING
        filename                = im_idoc_types_file
      CHANGING
        data_tab                = lt_csv_file
      EXCEPTIONS
        file_open_error         = 1
        file_read_error         = 2
        no_batch                = 3
        gui_refuse_filetransfer = 4
        invalid_type            = 5
        no_authority            = 6
        unknown_error           = 7
        bad_data_format         = 8
        header_not_allowed      = 9
        separator_not_allowed   = 10
        header_too_long         = 11
        unknown_dp_error        = 12
        access_denied           = 13
        dp_out_of_memory        = 14
        disk_full               = 15
        dp_timeout              = 16
        not_supported_by_gui    = 17
        error_no_gui            = 18
        others                  = 19.

    IF sy-subrc NE 0.
      RAISE EXCEPTION TYPE cx_sy_file_io
        EXPORTING
          filename = im_idoc_types_file.
    ENDIF.

    "Copy the CSV file into the IDoc types table:
    LOOP AT lt_csv_file ASSIGNING <lfs_csv_record>.
      SPLIT <lfs_csv_record> AT ',' INTO lv_idoc_type lv_ext_type.
      APPEND INITIAL LINE TO me->idoc_types ASSIGNING <lfs_idoc_type>.
      <lfs_idoc_type>-idoc_type = lv_idoc_type.
      <lfs_idoc_type>-ext_type  = lv_ext_type.
    ENDLOOP.
  ENDMETHOD.                    " METHOD upload_idoc_types

  METHOD import_idoc_metadata.
    "Method-Local Data Declarations:
    DATA: lv_count TYPE i.

    FIELD-SYMBOLS: <lfs_idoc_type> LIKE LINE OF me->idoc_types.

    "Process each of the selected IDoc types in turn:
    LOOP AT me->idoc_types ASSIGNING <lfs_idoc_type>.
      "Check to see if the metadata already exists:
      SELECT COUNT(*)
        INTO lv_count
        FROM idxsload
       WHERE port    EQ me->idoc_port
         AND idoctyp EQ <lfs_idoc_type>-idoc_type
         AND cimtyp  EQ <lfs_idoc_type>-ext_type.

      IF lv_count GT 0.
        "If it does, go ahead and remove it:
        CALL METHOD remove_idoc_metadata( <lfs_idoc_type>-idoc_type ).
      ENDIF.

      "Now, import the IDoc metadata using IDX_FILL_STRUCTURE:
      SUBMIT idx_fill_structure
        WITH port    = me->idoc_port
        WITH idoctyp = <lfs_idoc_type>-idoc_type
        WITH cimtyp  = <lfs_idoc_type>-ext_type
        AND RETURN.
    ENDLOOP.
  ENDMETHOD.                    " METHOD import_idoc_metadata

  METHOD remove_idoc_metadata.
    "Remove the IDoc metadata using IDX_RESET_METADATA:
    SUBMIT idx_reset_metadata
      WITH port    = me->idoc_port
      WITH idoctyp = im_idoc_type
      AND RETURN.
  ENDMETHOD.                    " METHOD remove_idoc_metadata
ENDCLASS.

*&---------------------------------------------------------------------*
*& Selection Screen Definition                                         *
*&---------------------------------------------------------------------*
PARAMETERS: p_idxprt TYPE idx_port OBLIGATORY,
            p_ifile  TYPE string LOWER CASE OBLIGATORY.

*&---------------------------------------------------------------------*
*& AT SELECTION-SCREEN Event Module                                    *
*&---------------------------------------------------------------------*
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_ifile.
  CALL METHOD lcl_report=>get_frontend_filename
    CHANGING
      ch_file = p_ifile.

*&---------------------------------------------------------------------*
*& START-OF-SELECTION Event Module                                     *
*&---------------------------------------------------------------------*
START-OF-SELECTION.
  CALL METHOD lcl_report=>execute
    EXPORTING
      im_idoc_port       = p_idxprt
      im_idoc_types_file = p_ifile.

Final Thoughts

SAP NetWeaver 7.3 Goes Green

The concept of energy savings through SAP NetWeaver was first raised during the SAP Influencer Summit in December 2010 by Peter Graf, who used similar data to show how SAP development takes advantage of energy-saving possibilities, which results in lower costs for operating our software and reduced carbon emissions.

In this blog I want to explain in a bit more detail how this calculation was done and which assumptions it is based on. To make my case, I will use the example of SAP NetWeaver Process Integration (PI) software, as it is the area I worked on for many years and which I am most familiar with.

To demonstrate this idea, SAP carried out testing of its new SAP NetWeaver Platform using the SAP Application Performance Standard (SAPS). SAP defined a medium-sized landscape as being 37,500 SAPS for an SAP NetWeaver Process Integration (PI) customer.

Based on this definition, they found that the savings potential for PI is in the region of:

  • 60% less energy consumption or around 32,000 kWh/yr
  • 16 tons of CO2 savings per landscape/year
  • €6.5k saving potential per landscape/year

Medium-sized PI landscapes

As we do not have fully accurate customer data on the sizes of PI installations, we decided to undertake an expert evaluation of small, medium, and large PI customer installations. With input from the SAP field organization, PI product management, PI sizing experts, and regional implementation team experts, we assumed that a small PI customer has 5,000 SAPS installed for the production system, a medium customer 15,000 SAPS, and a large customer 30,000 SAPS in production.

Customer landscapes consist not just of the production environment, but also of development and quality assurance systems; so we decided to apply a factor of 2.5 times the production system for the complete customer landscape. Usually the quality assurance environment is the same size as production, whereas development is half the size of production. This led to the conclusion that a medium-sized PI landscape is about 37,500 SAPS.
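As a quick sanity check, the landscape arithmetic above can be reproduced in a few lines. The SAPS figures and the 2.5 factor are taken from the text; the script itself is only an illustration:

```python
# Assumed production-system sizes from the text (in SAPS)
production_saps = {"small": 5_000, "medium": 15_000, "large": 30_000}

# Full landscape = production (1.0) + quality assurance (1.0) + development (0.5)
LANDSCAPE_FACTOR = 2.5

landscape_saps = {size: saps * LANDSCAPE_FACTOR
                  for size, saps in production_saps.items()}

print(landscape_saps["medium"])  # -> 37500.0, the medium landscape from the text
```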

Why we calculated with "SAPS"

With the help of the SAP Standard Application Benchmark results at www.sap.com/benchmark, statements can be made about the CPU and memory consumption of particular software components. SAPS is an artificial measure designed to decouple software from hardware influences. That gave us the possibility to exclude improvements in the hardware environment from the improvements in the software.

How these savings are reflected in the SAP (Quick) Sizing

Improvements achieved at the software level can mean either that customers reduce their hardware demand or, with regard to PI usage, that they process more messages. We did the calculation in a way that kept the number of messages the same and assumed the customer will reduce his hardware. The calculation is based on a mix of scenarios executed as integrated scenarios on a PI system. For the PI-centric folks: 75,000 asynchronous messages of 100 KB and 25,000 synchronous messages of 20 KB, with simple mapping and receiver determination, executed in one hour on a standard PI dual-stack setup. That gave a demand of 15,000 SAPS in classic PI dual-stack usage at 66% hardware utilization. The same load can now be executed on the AEX, the Java-only deployment option of PI, with 61% less SAPS and hardware demand. These numbers have been tested and verified; therefore customers and partners will see the same results using the SAP PI sizing tool.
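The hardware effect of that 61% reduction can be sketched with the figures above (numbers from the text; the calculation is illustrative only):

```python
dual_stack_saps = 15_000   # classic PI dual-stack demand for the scenario mix (from the text)
reduction_pct = 61         # AEX (Java-only) needs 61% less SAPS (from the text)

# Remaining SAPS demand after moving the same message load to the AEX
aex_saps = dual_stack_saps * (100 - reduction_pct) // 100
print(aex_saps)  # -> 5850
```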

How to transfer SAPS to energy consumption, costs of energy, and carbon emissions

All hardware needs a different amount of power to achieve a certain number of SAPS. With the SAP Power Benchmark, a hardware vendor does not only show that its hardware setup can provide the requested SAPS, but also how much energy input is needed to achieve this. As we could not calculate for all hardware vendors, we decided to take the average measured values for up-to-date hardware from 2010. With further improvements in hardware these values will have to be adapted, but as hardware stays in place for several years and older hardware is much more energy-intensive, we think the figures will hold for this year.

To calculate the energy costs and potential customer cost savings we took the German price of €0.20 per kilowatt-hour (kWh), which is certainly different for other countries and for business customers buying and consuming large amounts of energy. The same is true for the carbon emissions created to produce one kWh, because that depends on the individual country's energy mix of nuclear power, natural gas, coal, and renewable energy. For Germany this was 506 grams of CO2 per kWh according to BDEW, and as this energy mix is not unusual we took the number for the calculations. Result: 16 tons of CO2 savings per landscape/year.
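Putting the assumptions together (32,000 kWh/yr saved, €0.20/kWh, 506 g CO2/kWh, all from the text), the headline figures can be re-derived as a sketch:

```python
energy_saved_kwh = 32_000    # annual energy savings per landscape (from the text)
price_eur_per_kwh = 0.20     # German electricity price assumed in the text
co2_g_per_kwh = 506          # German energy mix, BDEW figure cited in the text

# Annual cost savings per landscape, rounded to whole euros
cost_saving_eur = round(energy_saved_kwh * price_eur_per_kwh)       # -> 6400, i.e. ~ €6.5k
# Annual CO2 savings per landscape, converted from grams to tons
co2_saving_tons = round(energy_saved_kwh * co2_g_per_kwh / 1e6, 1)  # -> 16.2, i.e. ~16 t

print(cost_saving_eur, co2_saving_tons)
```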

How savings can be achieved

The main reason is that with integrated configurations all steps can be executed on one stack, in our case Java. That reduces the persistency steps from three or more in "dual-stack times" (ABAP + Java) to one on average. As we learned that write operations to hard disks are on average close to two times more power-demanding than CPU and memory usage, this already accounts for the major part of the savings. Additionally, there is no further need to connect one or two times from the ABAP to the Java stack at runtime, which results in further savings in resource consumption.

Another very big advantage of PI 7.3 and the Java-only deployment option is that it brings only the software resources that are absolutely needed for PI. That reduces installation times, software patching times, and restart times of the Java server. The times measured internally at SAP on hardware with good read/write performance are around two hours for an AEX installation and one and a half to two minutes for a restart of the Java server. Due to the advanced installer and the new PI 7.3 CTC scripts (SAP Service Marketplace logon required), these numbers will be achievable for customers, too. Customers can get the benefits immediately, as the product has been generally available since 31 May 2011.

What we can learn for other SAP application developments

Process tasks in one step and on one technical stack for as long as possible, as the AEX does for the main part. Reducing write operations to hard disks significantly reduces power consumption. Therefore, keep processing in memory as long as you can, and write only those logs and application data to disk that are really necessary!

What the future holds

There are many areas where SAP can improve the carbon footprint of its software solutions. At the company level, on-demand solutions could eliminate the need for each customer to deploy and operate their own system landscape and to provide the hardware and trained people to run these applications. Another possibility is the in-memory approach to reduce read/write operations.

Individuals can also play a role. Many improvement possibilities are already known in people’s respective areas of expertise. Maybe you have your own findings and best practices you’d like to share. Feel free to post and discuss!

This publication is part of SAP’s Green IT initiative, which is based on the following three pillars:

  • Energy – reducing energy consumption
  • E-waste – minimizing waste through sustainable sourcing and recycling
  • Dematerialization – substituting high-carbon products and activities with low-carbon alternatives

SAP is committed to optimizing these three areas in its own IT operation and its product development organization, as well as to offering compelling solutions and services to our customers. If you need more information, please refer to www.sap.com/greenit

How to solve JMS reconnection issue between SAP PI and Oracle Fusion Application Server (BEA/WLS)

A persistent issue has been identified in integration scenarios using JMS-based interfaces between SAP PI and the Oracle Service Bus (OSB, formerly known as BEA AquaLogic Service Bus). The issue normally arises when the connection of a JMS receiver or sender channel becomes temporarily unavailable.

When the instance or server hosting the JMS queue is restarted, the SAP PI JMS communication channel is not able to re-establish the JMS connection. As a result, existing or new messages placed on the remote JMS queue(s) are not picked up by SAP PI.

The workaround in most cases involves restarting the communication channels, and sometimes it requires restarting the entire Java stack of SAP PI. After many unsuccessful attempts to solve this problem, including but not limited to researching possibly applicable SAP Notes and requesting support from both SAP and Oracle, we finally found the source of the problem on the Oracle side of the scenario. In the following lines I will explain how we solved it.

The Oracle Application Server, also called WLS (WebLogic Server), hosting the JMS queues provides several parameters that can be configured for different purposes, for instance performance, security, availability, and persistency. See figure 1.


Figure 1.

One of those parameters in particular is the “Reconnect Policy”, which applies to the JMS Connection Factory used by JMS clients (such as SAP PI) and governs how JMS clients should react when a connection with the JMS server has to be re-established.

The parameter can be found on the “Client” tab under Messaging\Summary of Services\JMS Modules\..

Figure 2.

Figure 3.

As shown in the screenshot above, the parameter can be set to None, Producer, or All. By default, the value is set to “Producer”.

Figure 4.

We changed the value from “Producer” to “None”, saved the new settings, and restarted the Oracle Application Server to test the new configuration. The results after the change were positive, and we have not seen the issue since.

View Message Payload for Interfaces using WS-RM Standard without involving SAP Process Integration (SAP PI)


Scenario: SAP systems have interfaces using the WS-RM standard with other systems (SAP or non-SAP) without involving SAP Process Integration as middleware, for example consuming a web service via a WSDL proxy.

Problem: I faced this situation in my project. In this type of interface, monitoring is quite difficult as there is no middleware, and it is also difficult to view the XML message payload.

Solution: This blog should help you check the message payload in this type of scenario and help with error handling. It provides a step-by-step process to view the XML message payload using debug mode.

The blog contains two parts:

1: View Request message

2: View Response message in case of Synchronous communication.

1: View Request message

1.1 Put the breakpoint at the following point:

Class: CL_SOAP_MESSAGE_NEW

Method: IF_SOAP_MESSAGE_PART~SERIALIZE

Line No: 172 xmlout = l_xml_writer_class->get_output( ).

Request_Break_Point

1.2 Run the interface; the debugger screen will open and the process will pause at the above breakpoint. Double-click on the variable xmlout to view the runtime data, then press F6.

Request_Debug

1.3 Open the variable XMLOUT in the detail section with a double-click and view it as XML Browser.

Request_Message

The XML request payload can now be viewed.

2: View Response message in case of synchronous communication

2.1 Put the breakpoint at the following point:

Class: CL_SOAP_MESSAGE_NEW

Method: DESERIALIZE_BODY

Line 24: start of the method

Response_Break_Point

2.2 Run the interface; the debugger screen will open and the process will pause at the above breakpoint. In the variables tab, enter the variable m_xml_reader.

Response_Debug

2.3 Open the variable in the detail section and click the attribute M_INPUT.

Response_attribute

2.4 View the attribute M_INPUT as XML Browser.

Response_Message

The XML response payload can now be viewed.
