Around Europe and Asia, a number of countries have already moved successfully from a three-day ACH clearing model to one that accommodates immediate payments. Vocalink, in the United Kingdom, has exported its faster payments architecture, and few would bet against further expansion of that business following its acquisition by MasterCard. In many cases, immediate payment models are implemented using a convergence strategy, in which account-to-account payments are processed on the same proven infrastructure as card transactions.
This convergence approach is possible because these transactions are processed in similar ways. It is desirable because it eliminates duplication (and therefore cost) and supports the omnichannel experience that consumers prefer. As recently as this month, the UK's Bank of England announced that it would combine its multiple same-day and high-value payment systems onto a single infrastructure to make its operations more efficient.
While convergence is clearly good for consumers, payments providers and regulators, it introduces new challenges for those responsible for testing the systems to ensure their availability and throughput. The resulting converged infrastructure contains many varied participants with different requirements and priorities. The expansion of participants also increases the number and variety of acquisition points for transactions, including some that the providers do not own (e.g., mobile phones).
These new players, new end-points and new devices often also rely on third-party services such as wallets or tokenisation servers. The combined effect of expanding the payments ecosystem to include all of these new components is an increase in complexity that wasn't envisaged 10 years ago. Yet organisations continue to assess and test these systems using the same decades-old approaches they utilised 10 years ago, when systems were much simpler.
When a transaction relies on a cloud-based service during processing, for example to return an exchange rate, testing complexity increases further. A test harness must be able to cope with a range of different technologies and communications models within the same transaction: mobile apps, browser sessions, ISO messages and service requests.
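One common way to keep such cloud dependencies testable is to virtualise them behind a lightweight local stub, so mobile-app, browser and ISO-message flows can all be exercised without the real service. The sketch below is illustrative only: the URL path, currency pairs and response shape are assumptions, not any real exchange-rate API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative stand-in for a cloud exchange-rate service.
# The path scheme and response fields are assumptions, not a real API.
RATES = {"GBP/USD": 1.27, "EUR/USD": 1.08}

class RateStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Requests arrive as e.g. /GBP_USD; map back to "GBP/USD".
        pair = self.path.lstrip("/").replace("_", "/")
        if pair in RATES:
            body = json.dumps({"pair": pair, "rate": RATES[pair]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown pair"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_stub(port=0):
    """Start the stub on an ephemeral port; returns the running server."""
    server = HTTPServer(("127.0.0.1", port), RateStub)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    import urllib.request
    server = start_stub()
    url = f"http://127.0.0.1:{server.server_port}/GBP_USD"
    with urllib.request.urlopen(url) as resp:
        print(json.loads(resp.read()))
    server.shutdown()
```

Because the stub is under the harness's control, it can later be scripted to return slow responses or errors, which is exactly the kind of production condition that is hard to provoke against a live cloud service.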
When a transaction involves this much complexity, it is typically beyond human ability to manually synchronise the multiple components while creating error conditions that cover the full range of production conditions. Testing these variables successfully requires a level of programmatic or scripted control. This is more than automation: it is the coordination and synchronisation of test data, test conditions, component simulations and the system under test.
Once such complex test harnesses are developed, organisations rapidly become dependent upon them to achieve their time-to-market objectives while maintaining their quality standards. For this reason, it is critical that organisations plan for the maintainability of their test platforms. Platforms based on record-and-replay are typically brittle and require extensive rework when the system under test is modified. Script- and rule-based systems are more manageable over the lifetime of the system under test.
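The brittleness contrast can be illustrated in a few lines: a record-and-replay stub matches whole recorded messages byte-for-byte, so any incidental change in the system under test misses the recording, while a rule-based stub matches only on the fields that matter. The field names below loosely echo ISO 8583 but are assumptions for illustration.

```python
import json

# Record/replay: responses are keyed on the exact recorded message, so a
# new field, a changed trace number or reordered keys break the match.
recordings = {
    '{"mti": "0100", "amount": 100, "stan": "000001"}':
        {"mti": "0110", "code": "00"},
}

def replay(raw_msg):
    return recordings.get(raw_msg)  # None on any byte-level difference

# Rule-based: predicates over the fields that matter survive such changes.
rules = [
    (lambda m: m.get("mti") == "0100" and m.get("amount", 0) <= 500,
     {"mti": "0110", "code": "00"}),   # approve small authorisations
    (lambda m: m.get("mti") == "0100",
     {"mti": "0110", "code": "05"}),   # decline everything else
]

def rule_respond(msg):
    for predicate, response in rules:
        if predicate(msg):
            return response
    return None

if __name__ == "__main__":
    # Same transaction, but with a new trace number (stan).
    msg = {"mti": "0100", "amount": 100, "stan": "000002"}
    print(replay(json.dumps(msg)))   # None: the recording no longer matches
    print(rule_respond(msg))         # the rule still approves it
```

The recorded stub has to be re-captured whenever the system under test changes an incidental field; the rules only change when the business behaviour they encode changes.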
The complexity created by the continued evolution of payment channels and types is unlikely to abate. To address this pace of change, organisations will look for ways to improve their economies of scale while meeting the expectations of clients and customers. Leading payment providers have already started deploying the next-generation testing strategies needed to ensure that these vital systems can support the increasing demand generated by consumers and businesses.