Knowledge Integration Dynamics

Look before you leap


By Mervyn Mooi, Director at Knowledge Integration Dynamics (KID)
[Johannesburg, 25 Sep 2012]

That more business information is moving into the cloud, and that the cloud stores an increasing share of business data, is not news. Nor is it news that one of the biggest challenges is keeping the quality of that information in line with enterprise norms.

One challenge that still remains, however, is ensuring good-quality information when it resides in off-premise, cloud-based systems.

"The SLAs have never focused on maintenance or watchful and responsible care."

The hurdle has always been that organisations storing information in the cloud have been subject to service level agreements (SLAs) with their service providers that focus on access availability, speed of delivery, data recovery and security. The SLAs have never focused on maintenance or watchful and responsible care in accordance with enterprise processes, procedures and practices. The result is that the information can never be fully trusted.

Identical rules

Integration and quality concerns, which underscore the difference between trusted and untrusted information, require that both cloud and on-premise information and content be subject to the same standards. They must endure the same rigours: the same exchange protocols, integration and quality processes, domain-specific business rules and so on, so that all information is managed in a uniform, standardised manner.
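The principle above can be sketched in code. This is an illustrative sketch only, with hypothetical rule names, not any particular tool's API: a single, common set of quality rules is applied identically to an on-premise record and a cloud-resident one, so both are judged by the same enterprise norms.

```python
# Sketch: one shared rule set applied to records from any environment.
# Rule names and record shapes are hypothetical, for illustration only.

RULES = [
    ("has_id", lambda rec: bool(rec.get("id"))),
    ("valid_email", lambda rec: "@" in rec.get("email", "")),
]

def validate(record):
    # Returns the names of the rules the record fails; empty list means clean.
    return [name for name, check in RULES if not check(record)]

on_premise_record = {"id": "C001", "email": "ops@example.com"}
cloud_record      = {"id": "",     "email": "not-an-address"}

print(validate(on_premise_record))  # []
print(validate(cloud_record))       # ['has_id', 'valid_email']
```

The point of the design is that the rule set lives in one place; neither environment gets its own, weaker definition of quality.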

One way to achieve that is to map data between on-premise systems and those in the cloud through a standard, common set of logic and rules that govern the information and content. That architecture involves a compromise in processing and storage performance, which is inevitable in any type of exchange, but it should not constrain the design to the extent that management of unstructured cloud information and content is largely ignored.

Another approach is to exchange only the information or content that applications and users require for queries or reports, which keeps network traffic and storage requirements to a minimum. Virtualisation and federation technologies can be exploited here, because they do not physically move all the information or content from its place of origin; they reference it, and only the requisite bits are actually copied. That offers another enormous advantage: the information and content are left intact at source, managed under local standards and security.
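The federated approach can be illustrated with a minimal sketch. All names here are hypothetical, not a real product's API: a view object references a remote (cloud) dataset in place and materialises locally only the fields a query actually asks for, leaving the source data intact.

```python
# Sketch of the federation/virtualisation idea: reference data at source,
# copy only the requisite bits. Names are illustrative, not a real API.

class FederatedView:
    def __init__(self, source):
        # 'source' stands in for a remote, cloud-resident dataset that we
        # reference rather than replicate wholesale.
        self.source = source

    def query(self, fields, predicate=lambda rec: True):
        # Only the requested fields of matching records are copied locally.
        return [{f: rec[f] for f in fields}
                for rec in self.source if predicate(rec)]

# The dataset stays at source, governed by local standards and security.
cloud_customers = [
    {"id": 1, "name": "Acme",  "region": "ZA", "credit_limit": 50000},
    {"id": 2, "name": "Umbra", "region": "UK", "credit_limit": 20000},
]

view = FederatedView(cloud_customers)
# A report needs only name and region for ZA customers:
print(view.query(["name", "region"], lambda r: r["region"] == "ZA"))
# [{'name': 'Acme', 'region': 'ZA'}]
```

Note that the credit-limit field never leaves the source in this query, which is exactly the traffic and storage saving the approach promises.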

Downhill from here

Failure to resolve this issue once a cloud-based architecture is adopted will exacerbate storage and duplication problems, which in turn inflame distrust of business systems. Experience shows that when this occurs, the speed, flexibility and accuracy of information supply to business users break down, leaving organisations inflexible, lethargic in the face of rapid market shifts, and caught in a spiral of margin erosion. More and more companies face the dilemma of which solution to turn to.

In an October 2011 report, IDC VP for storage systems, Richard Villars, stated that companies worldwide spent $3.3 billion on public cloud-based storage in 2010. He projected a compound annual growth rate of 28.9%, which puts the global spend at $11.7 billion by 2015. By comparison, total 2010 spend on on-premise storage was around $30 billion; IDC's forecast for cloud-based storage by 2015, at more than $37 billion, pulls ahead of that figure. Interestingly, the same report projects that service providers will increase their spend from $3.8 billion in 2010 to $10.9 billion by 2015.
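The $11.7 billion projection follows directly from compounding the 2010 base at the stated growth rate over the five years to 2015; the figures are as quoted from the IDC report above.

```python
# Compounding the 2010 public cloud storage spend at the stated CAGR.
base_2010 = 3.3   # $bn spent on public cloud-based storage in 2010 (IDC)
cagr = 0.289      # 28.9% compound annual growth rate

projection_2015 = base_2010 * (1 + cagr) ** 5  # five years: 2010 -> 2015
print(round(projection_2015, 1))  # 11.7
```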

So, once the cloud is incorporated into enterprise information strategies, the challenge applies regardless of which option companies choose: there will always be a growing need to expand existing on-premise information management processes and capacity to accommodate external cloud information and content.
