I have heard that APO does not do a good job of handling the high volume of transactions and algorithms that are required in a good advanced planning and optimising solution. Is this true? In your experience with implementing APO, have you uncovered a cache issue?
I have seen the issues that you mention, although in my experience these were in the earlier days of APO, and SAP have spent considerable time and resources rectifying these problems.
With the high volume of transactions, SAP changed their approach from single outbound queues (between R/3 and APO) to multiple inbound and outbound queues. This allows a higher volume of transactions to be processed, speeds up the transfer and gives a much more robust solution. In my last project we initially had single outbound queues, and the queues blocked at least once per day. Since moving to multiple inbound and outbound queues, we have not seen a single block.
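To illustrate why the move to multiple queues helps, here is a small, purely hypothetical sketch (not SAP code): in a single serial queue, one stuck entry holds up everything behind it, whereas hashing transactions across several parallel queues confines the block to a single channel.

```python
from collections import deque

def process(queues, is_stuck):
    """Process each queue in order; a stuck entry blocks only its own queue."""
    delivered, blocked = [], []
    for q in queues:
        while q:
            if is_stuck(q[0]):
                blocked.extend(q)  # everything behind the stuck entry waits
                break
            delivered.append(q.popleft())
    return delivered, blocked

transactions = list(range(10))
stuck = lambda t: t == 3  # say transaction 3 fails, e.g. a master-data error

# Single outbound queue: transactions 3..9 are all held up.
d1, b1 = process([deque(transactions)], stuck)

# Four parallel queues (hash on transaction id): only one queue blocks.
multi = [deque(t for t in transactions if t % 4 == i) for i in range(4)]
d2, b2 = process(multi, stuck)

print(len(d1), len(b1))  # 3 delivered, 7 blocked
print(len(d2), len(b2))  # 8 delivered, 2 blocked
```

The queue-per-hash split is an assumption for illustration; the point is simply that parallel channels remove the single point of head-of-line blocking.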
For the cache issues, I see this as a learning exercise with the sizing of liveCache and the constraints of liveCache on NT. As with all new technologies, it takes actual use of the software to fully understand how it behaves, and liveCache is no different. SAP have modified the sizing programs to reflect what they have learnt, and the results of sizing now seem much closer to the actual go-live systems. With liveCache on NT there was a hard limit of 2 GB of addressable memory per process under NT, and 3 GB under NT Enterprise, and this resulted in some customers running out of liveCache memory. This has been resolved with the release of Unix liveCache, and most customers with large liveCache requirements now run on Unix rather than NT.
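A rough back-of-the-envelope calculation shows how quickly a large planning model can exceed those 32-bit limits. The per-object byte counts below are illustrative assumptions, not SAP's sizing formulas:

```python
# Hypothetical sizing arithmetic for a memory-resident liveCache:
# all figures except the NT address-space limits are assumed.
GB = 1024 ** 3
nt_limit = 2 * GB             # standard 32-bit NT process address space
nt_enterprise_limit = 3 * GB  # NT Enterprise with the larger user space

orders = 2_000_000
bytes_per_order = 1_500       # assumed: order plus pegging/ATP structures
time_series = 500_000
bytes_per_series = 3_000      # assumed: DP key figures over the horizon

required = orders * bytes_per_order + time_series * bytes_per_series
print(f"required ~ {required / GB:.1f} GB")          # ~ 4.2 GB
print("fits under NT?", required <= nt_limit)                  # False
print("fits under NT Enterprise?", required <= nt_enterprise_limit)  # False
```

Even with generous rounding, a model of this size cannot fit in a 2 GB or 3 GB address space, which is why 64-bit Unix platforms became the practical choice for large liveCache installations.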