Asset-sharing can learn from the Olympic relay athletes, says Barry Hall.
This year, the Olympic event I’ve missed the most is the relay. What a compelling challenge to watch: each athlete has their own race, but they can’t reach a podium unless their teammates also succeed in theirs. They can set their colleagues up for success with a clean start, explosive energy and an expertly timed handover, but those same elements, poorly performed, bring the whole team to failure.
The asset lifecycle is never going to attract an arena of spectators and – despite playing our role in megastructures – we won’t earn gold medals. But there are definite parallels. In asset management, we’re a team working towards one goal, constantly improving infrastructure to be resilient, sustainable, cost-effective and serving its users perfectly. And, although we have our own team around us at each stage, as asset managers we’re not necessarily acting simultaneously. Our relay partners lie ahead or behind us at concept, design, construction, operation or even end-of-use stages.
To make this chain of asset and information management successful, data needs to be well structured from the get-go. On projects, much of the relevant data is produced during design and construction phases. But once the designs are completed and the construction crew has cut the ribbon on the front door, what are most asset managers left with? Too often data is simply lost, either by not being transferred at all or by being transferred in an oblique manner with data zipped into nested files or missing associated metadata.
In short, it tends to be incomplete, inaccessible and unsuitable for asset managers’ data needs.
A structure within a data environment relates to how information is formed and grouped when being prepared for the data owner, and to the all-important delivery of that data. After all, many a well-laid plan has run up against file-size blockers or incompatible software that has ruined the carefully prepared structure.
Structure defines how a data table is built by format (data type, units and size) for each record. A good structure defines a consistent and easily understood approach to how records about assets are stored. Badly or inconsistently structured data results in errors when reporting and analysing the information. This could be something as simple as column names with confusing double meanings, or values written as text in some instances and as numbers in others.
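The text-versus-number problem above is easy to demonstrate. The sketch below uses pandas and an entirely hypothetical asset register (the `asset_id` and `length_m` fields are invented for illustration) to show how one free-text value quietly breaks numerical analysis:

```python
import pandas as pd

# Hypothetical asset register where "length_m" was entered inconsistently:
# most records as numbers, one as free text.
records = pd.DataFrame({
    "asset_id": ["BR-001", "BR-002", "BR-003"],
    "length_m": [12.5, "15 metres", 9.0],
})

# The mixed column cannot be summed or averaged reliably as-is.
# Coercing to numeric reveals which records fail the agreed format.
clean = pd.to_numeric(records["length_m"], errors="coerce")
print(clean.isna().sum())  # 1 record could not be parsed as a number
```

Here the bad record is at least detectable; with a column type enforced at entry, it would never have been stored in the first place.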
Having a well-structured repository to hold the data is only a part of the solution: the content entered within the structure must also meet an agreed level of quality and consistency.
Assigning information to an asset should be done in a way that benefits future users of that information. Common pitfalls include putting consistency in jeopardy by using “free” text fields in place of drop-downs, duplicating data fields between two datasets covering different aspects of the same asset, or (much worse) removing the common fields that should link multiple datasets to each other, pretty much guaranteeing that the complete picture can never be analysed at once.
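The value of those common linking fields can be sketched in a few lines. The two datasets and their field names below are hypothetical, but the point is general: keep a shared identifier and the complete picture is one join away; strip it out and the datasets can never be recombined.

```python
import pandas as pd

# Two hypothetical datasets about the same assets: a design register
# and a maintenance log. The shared "asset_id" field links them.
design = pd.DataFrame({
    "asset_id": ["BR-001", "BR-002"],
    "material": ["steel", "concrete"],
})
maintenance = pd.DataFrame({
    "asset_id": ["BR-001", "BR-002"],
    "last_inspected": ["2023-04-01", "2022-11-15"],
})

# With the common field intact, the complete picture is a single merge.
combined = design.merge(maintenance, on="asset_id")
print(len(combined))  # 2 fully linked records
```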
Without consistency in content, we cannot create resilient automated solutions to data-driven problems.
And this is a key requirement to facilitate accurate and reliable results in both the population and analysis of the data within asset management. Automation is a wonderful thing because the same results can be produced time and time again without requiring an in-depth knowledge of data management practices or software.
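One concrete form such automation can take is a validation routine run at every handover. The function and field names below are hypothetical, but they illustrate the point: the same checks run identically every time, with no in-depth data-management knowledge required of whoever runs them.

```python
# A minimal, hypothetical validation routine for one asset record:
# report which agreed required fields are missing or empty.
def validate_record(record, required=("asset_id", "install_date", "condition")):
    """Return the list of required fields absent or blank in the record."""
    return [field for field in required if not record.get(field)]

# The same call produces the same answer every time it runs.
errors = validate_record({"asset_id": "BR-001", "condition": "good"})
print(errors)  # ['install_date']
```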
Passing the baton
For anyone who has ever used data across the development lifecycle, it will be blindingly obvious that one user’s data requirements are not the same as another’s. To maximise the benefits of analysis and automation, there is a need, when passing on this data baton, to maintain or improve the value of the information held. If the baton is mishandled or dropped, then the consequences, unless quickly repaired, will remain for the duration. But because each team uses the data to meet its own goals – the requirements of a concept designer are quite different from those of the maintenance team – that value can be difficult to maintain.
Cutting the noise
Selecting what you do not need is always more difficult than deciding what to keep, but it’s just as important. A big risk to data quality comes from the inclusion of information that is redundant to the forward users. This is a ‘just because you can doesn’t mean you should’ issue: challenge yourself on why each piece of data is required. Bulky data detracts from important attributes, so keeping it lean will make sure your data heirs aren’t left hunting for a needle in a haystack.
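The ‘keep it lean’ principle above can be expressed as a simple filter against an agreed list of required fields. The record and field names here are invented for illustration; the idea is that design-phase detail is deliberately left behind at handover rather than buried in the data passed forward.

```python
# Hypothetical handover: keep only the fields the forward users have
# agreed they need, so the next team isn't hunting through bulky data.
required_fields = {"asset_id", "location", "condition"}

record = {
    "asset_id": "BR-001",
    "location": "Bridge St",
    "condition": "good",
    "cad_layer_colour": "magenta",  # design-phase detail, not needed downstream
    "plot_scale": "1:200",
}

handover = {k: v for k, v in record.items() if k in required_fields}
print(sorted(handover))  # ['asset_id', 'condition', 'location']
```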
Reaping the rewards
Get it right and data provides myriad opportunities for enhanced asset management. And although the most successful asset managers won’t see themselves on a literal podium anytime soon, what we will see is information on our assets: real buildings, structures, utilities and transport networks with real users, real maintenance budgets, real retrofit needs and real carbon footprints. They deserve our best, and if we work together as an industry to run our relay, we can deliver their best.
Barry Hall is principal GIS consultant at Atkins.