Trimble BIM ambassador Leif Granholm offers his vision of an open-standard framework for 3D modelling data. He says we need to think about data flows, not just data storage.
I’m a strong advocate for open standards, both within Trimble and in the broader market. I’ve also participated in the development of these standards together with buildingSMART, OGC, ISO, CEN and others.
I see open standards as the biggest enabler of a true market economy in the BIM world. That said, the actual role of open standards is still unclear in the general BIM market. I believe this is partly due to the way we approach data.
From paper to digital
The traditional way of designing IT systems starts from human-readable documents. You look at how things are done on paper and then you digitise that process. But when we talk about model-based information, we embrace digital processes right from the start.
The distinction is important. When you start with digital – as we’ve done at Trimble – then you can leave out many of the unnecessary things from the paper world and do new things that are only possible in the digital world.
This has big implications for the way in which data is presented and consumed.
In the human-readable document world, the content and the presentation of data were 100% bundled and controlled by the creator of the data. De-bundling was not possible.
This is not so in the digital and model-based world, where you get the de-bundled data in a machine-readable format. You always need some software to present the information, with the user having the power to control how it’s presented.
Until now, the BIM industry has been very much focused on the modelling of human-readable documents. That’s why 80% of software is for production and about 20% is for consumption. But we are in the midst of a dramatic transformation, as we are increasingly publishing and sharing machine-readable data.
My vision for BIM is to automate the way this data is received and read by software. This would have big implications for the whole industry, and it brings huge opportunities for innovation in the development of new kinds of data-consumption software.
Productivity BIM: a new concept for BIM-based workflows
The original idea behind open BIM – as with the original idea of IFC – is that it would operate as a database schema that everyone would openly populate. But it’s now clear this will not happen in the foreseeable future. There will be no single database of BIM data.
If such an open BIM database is not possible, then what is? Currently, there is no widely accepted public concept that can answer this question.
Hence, I propose a viable concept for a BIM-based workflow that solves most of the problems of traditional BIM workflows, while still delivering on the promises of BIM.
I call this concept: Productivity BIM.
If you ask construction professionals why they use BIM, most will say it’s about sharing information in order to work together more efficiently. If you extend this thinking, then you see that many people are not using BIM for themselves – they are doing so for other people. This creates what I refer to as the BIM burden.
For BIM not to be a burden on the process, we need to focus on how modelling and digital technology help each party to do their job more efficiently. You should be able to model BIM data according to your needs only, and then publish that data for others to use to the extent they need. The collaboration aspect of BIM will come as a by-product.
Software needs to be built to support Productivity BIM. Very seldom does somebody create a Tekla Structures model just for collaboration. The model is usually made for a specific purpose by a given function within the supply chain.
One of the advantages of the Productivity BIM approach is that when you create a model to support a specific task, the data quality is usually very good.
Learning from game industry architecture
I find it useful to think of the object types that comprise a 3D model in four main categories:
1. Functional objects: these represent how the building works and is used.
2. Physical objects: the different components needed to construct the building according to its planned design.
3. Logical objects: these are constraints, such as the requirement that two walls meet at a right angle.
4. Abstract objects: space itself is the most obvious example of an abstract object.
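To make the categorisation concrete, the four object types above can be sketched as a simple tagged data model. This is a toy illustration with invented names, not an IFC or Tekla data structure; the point is only that a task can filter the model for the categories it cares about.

```python
# Toy sketch of the four object categories from the article.
# Names and example objects are invented for illustration only.
from enum import Enum, auto
from dataclasses import dataclass

class Category(Enum):
    FUNCTIONAL = auto()  # how the building works and is used
    PHYSICAL   = auto()  # components needed to construct the building
    LOGICAL    = auto()  # constraints between other objects
    ABSTRACT   = auto()  # e.g. space itself

@dataclass
class ModelObject:
    name: str
    category: Category

model = [
    ModelObject("meeting room", Category.FUNCTIONAL),
    ModelObject("precast slab", Category.PHYSICAL),
    ModelObject("two walls must be perpendicular", Category.LOGICAL),
    ModelObject("floor-2 space", Category.ABSTRACT),
]

# Different tasks need different slices of the same model:
physical_only = [o.name for o in model if o.category is Category.PHYSICAL]
print(physical_only)  # ['precast slab']
```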
The point to take from this categorisation is that the different tasks and phases of a project need various representations of reality.
For example, an architect may represent a slab as a single object, while for an engineer it may be five objects and for the detailer 10,000 or more. The detailers cannot do their job with the single slab, much as the architects cannot do their job with the detailer’s model. Each function requires a completely different representation of the same thing. In this way, almost any object has several owners, each responsible for a different aspect of it.
A general technology that can deal with this complex aspect of representation and ownership is a software architecture from the game industry called the entity component system. It’s now being looked at as a guideline for the development of the next generation of IFC.
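The core idea of an entity component system is that an object is just an identifier, and each discipline attaches and queries only its own components. The following is a minimal sketch of that pattern, assuming invented component names (ArchMassing and so on); it shows the general game-industry technique, not IFC’s actual design or any Trimble implementation.

```python
# Minimal entity-component-system (ECS) sketch: one entity ("the slab")
# carries independent aspects owned by different disciplines.
# Component names here are invented for illustration.
from dataclasses import dataclass
import itertools

@dataclass
class ArchMassing:          # the architect's single-object view
    volume_m3: float

@dataclass
class StructuralAnalysis:   # the engineer's view: a few analysis parts
    parts: list

@dataclass
class Detailing:            # the detailer's view: thousands of items
    rebar_count: int

class World:
    """Entities are just ids; components are stored per component type."""
    _ids = itertools.count()

    def __init__(self):
        self.components: dict[type, dict[int, object]] = {}

    def create(self) -> int:
        return next(self._ids)

    def attach(self, entity: int, component: object) -> None:
        self.components.setdefault(type(component), {})[entity] = component

    def query(self, ctype: type):
        """Yield (entity, component) pairs for one aspect only."""
        return self.components.get(ctype, {}).items()

world = World()
slab = world.create()  # one entity, many owned aspects
world.attach(slab, ArchMassing(volume_m3=12.5))
world.attach(slab, StructuralAnalysis(parts=["S1", "S2", "S3", "S4", "S5"]))
world.attach(slab, Detailing(rebar_count=10_000))

# Each discipline's "system" queries only the components it owns:
for entity, massing in world.query(ArchMassing):
    print("architect sees volume:", massing.volume_m3)
for entity, det in world.query(Detailing):
    print("detailer sees items:", det.rebar_count)
```

The design choice that matters here is that no discipline’s representation is privileged: the architect’s single-object view and the detailer’s 10,000-item view coexist on the same entity without either owning the other.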
In the meantime – as a simple solution that you can also already do with the current IFC framework – I’m proposing a federation- and publishing-based information architecture. What this essentially means is that you separate the data you create from the data you get from others. You keep the data sets in different schemas and never mix them up.
Multi-kernel software architecture
To do this in practice, we need a multi-kernel software architecture. With this architecture, you have software code that can deal with data in several schemas at the same time, and these schemas can be merged on the fly based on your use case. As all the project parties are working in parallel, these representations are constantly being updated as the project progresses.
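The federation-and-publishing idea can be sketched as follows: each party’s published data set lives in its own schema and is never mutated, and a read-only merged view is assembled on demand for a given use case. This is a minimal sketch with invented schema and property names, not a description of any actual common data environment.

```python
# Sketch of a federated, publishing-based data environment.
# Each published data set stays separate; views are merged on the fly.
# All object ids and property names are invented for illustration.
from types import MappingProxyType

# Two separately published data sets ("schemas"), never mixed at rest:
architect_pub = {"slab-01": {"use": "office floor", "fire_rating": "R60"}}
detailer_pub  = {"slab-01": {"rebar_count": 10_000, "concrete": "C30/37"}}

def merged_view(*sources: dict) -> dict:
    """Merge published sets into one read-only view per object.

    Later sources win on key clashes; the source sets stay untouched,
    which keeps "the data you create" separate from "the data you get".
    """
    view: dict[str, dict] = {}
    for source in sources:
        for obj_id, props in source.items():
            view.setdefault(obj_id, {}).update(props)
    # Wrap each object's merged properties in a read-only proxy:
    return {obj_id: MappingProxyType(props) for obj_id, props in view.items()}

view = merged_view(architect_pub, detailer_pub)
print(view["slab-01"]["fire_rating"])   # comes from the architect's set
print(view["slab-01"]["rebar_count"])   # comes from the detailer's set
```

Because the merged view is read-only and rebuilt per use case, each party keeps full ownership of its own published set, which is the essence of keeping the schemas separate and "never mixing them up".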
My prediction is that the bulk of new BIM software is going to be built on these kinds of common data environments. This means future competition between BIM vendors is going to be more about the interfaces between the desktop environment and the platform than it will be about the data environment itself.
The industry is clearly already moving in this direction, and I see it as my mission to make open standards the routine way to consume information. This would also open the market for new BIM start-ups and smaller vendors, as the playing field would be more or less level.
In the end, it’s all of our customers who would be the winners.