I’m on holiday this week, relaxing in the south of Spain with some lovely warm temperatures, which have been a boon compared to the summer we had in the UK this year. Part of my holiday reading has been Isaac Asimov and I’m ploughing through his short stories, currently one called “The Feeling of Power”.
The story discusses the re-discovery of basic mathematics, computed on paper rather than through a wholesale dependency on computers. The analogy to the storage industry wasn’t lost on me; we developed techniques in the ’70s and ’80s for managing mainframe storage which appear, to a certain degree, to have been lost in the move to plentiful resources and the Windows/Unix age.
Like the Asimov story (which focuses on two warring nations and their tit-for-tat technology advances), the storage fight pits the continual growth in demand for storing information against new technologies that improve our capacity to store it.
I wonder whether we should go back to first principles for data storage – and what were the ’80s “golden age” methodologies that get referred to so often? Well, firstly we have to accept that it was a different time then. The volume of data was nowhere near the levels we have today. However, there was a focus on cost – as there is today. My experience of mainframe storage revolved around the following:
- Standards. I worked at a site recently where the Storage Architect didn’t believe in setting sensible provisioning standards, being happy to rely on software to handle multi-pathing numbering settings. Whilst this is technically possible, in practice standards need to be adhered to. It’s common sense really. When you’re diagnosing problems, examining the load and balance of a system, or planning its scalability, standards are essential.
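To make the standards point concrete, here’s a minimal sketch of what automated checking of a naming standard might look like. The convention used (a PROD/TEST/DEV high-level qualifier followed by two alphanumeric qualifiers) is entirely hypothetical, not taken from any real site; the point is that a written-down standard is something you can test against.

```python
import re

# Hypothetical site naming standard (illustrative only):
# <HLQ>.<APP>.<QUALIFIER>, where HLQ is PROD, TEST or DEV and each
# qualifier is 1-8 characters, alphanumeric, starting with a letter.
STANDARD = re.compile(r"^(PROD|TEST|DEV)\.[A-Z][A-Z0-9]{0,7}\.[A-Z][A-Z0-9]{0,7}$")

def conforms(dataset_name: str) -> bool:
    """Return True if the name matches the (hypothetical) site standard."""
    return bool(STANDARD.match(dataset_name))

print(conforms("PROD.PAYROLL.MASTER"))  # True
print(conforms("JOE.STUFF.OLD"))        # False: HLQ not in the standard
```

Once a standard is expressed this way, it can be enforced at provisioning time rather than discovered as a mess during an outage.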
- Process. This is one of the key pieces of mainframe storage management. In the past I regularly trawled volumes for uncatalogued datasets (files), scanned catalogues, the VVDS and VTOCs for rogue entries, and ensured all datasets adhered to the standards laid out in the DFSMS configuration. There was a continuous focus on ensuring all datasets on disk were valid and required; DFHSM sucked up unreferenced datasets and moved them to tape and on to eventual expiration (subject to retention of a backup copy, of course).
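That migrate-then-expire lifecycle can be sketched in a few lines. The thresholds and dataset names below are illustrative assumptions, loosely in the spirit of a DFSMS management class, not values from any real configuration:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical policy thresholds (illustrative, not from a real config).
MIGRATE_AFTER_DAYS = 30    # unreferenced this long -> migrate to tape
EXPIRE_AFTER_DAYS = 365    # unreferenced this long -> expire (backup retained)

@dataclass
class Dataset:
    name: str
    catalogued: bool
    last_referenced: date
    has_backup: bool

def classify(ds: Dataset, today: date) -> str:
    """Decide what housekeeping action a dataset needs under the policy above."""
    if not ds.catalogued:
        return "REPORT_UNCATALOGUED"      # rogue entry: flag for investigation
    idle = (today - ds.last_referenced).days
    if idle >= EXPIRE_AFTER_DAYS and ds.has_backup:
        return "EXPIRE"                   # delete; backup copy satisfies retention
    if idle >= MIGRATE_AFTER_DAYS:
        return "MIGRATE_TO_TAPE"          # free up primary disk space
    return "OK"

if __name__ == "__main__":
    today = date(2024, 1, 1)
    scan = [
        Dataset("PROD.PAYROLL.MASTER", True, today - timedelta(days=2), True),
        Dataset("TEMP.WORK.SORT1", False, today - timedelta(days=90), False),
        Dataset("ARCH.LOGS.Y2022", True, today - timedelta(days=400), True),
    ]
    for ds in scan:
        print(f"{ds.name:24s} {classify(ds, today)}")
```

The value wasn’t in any one rule but in running the sweep continuously, so that disk only ever held data that was valid, catalogued and recently needed.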
Of course in the Asimov short story, the ultimate result of re-discovering mathematics was not explained. I’d hope that in the Storage world, we can learn from the past and improve on what we already know.