Tilbury Douglas tests low-cost apps for temporary works

Ben Bennett, principal engineer and head of temporary works at Tilbury Douglas, tells Denise Chevin how the contractor is using inexpensive mobile apps to aid safe demolition of a settlement tank for Thames Water.

Photogrammetry – the process of taking many photographs of an object from a variety of angles and stitching them together using algorithms to create a 3D model – has been making waves in construction. But the kit typically needed to capture such imagery – specialist cameras or drones with LiDAR (3D laser scanning), for example – can be expensive and requires experts to set up and operate.

Bennett is trialling inexpensive mobile apps used in the animation industry to do a similar job, producing centimetre-level accuracy (as opposed to the millimetre accuracy you might get with more expensive equipment), which is often adequate for communicating scenarios on site.

Here, he talks through the technology, resulting benefits and how he is currently trialling the apps to help in planning the safe demolition of a large primary settlement tank for Thames Water.

BIMplus: Which apps are you trialling?

Ben Bennett: Tilbury Douglas is trialling two apps – Polycam and Metascan – on the refurbishment of a water tank for Thames Water. Both are available as smartphone apps on Android and Apple devices.

How did you come up with the idea?

“I wondered if these apps could be used to scan areas on site in need of a design solution, to help in communicating the geometry and constraints to designers.”

Ben Bennett

I follow some creators in the special effects industry, and subsequently started seeing them using these applications to drastically speed up the process of creating 3D models for special effects purposes.

These applications are relatively inexpensive, so I started testing them myself and saw potential for their use on construction sites, where it can often be difficult to communicate scenarios to a designer who is far away without a site visit or model.

I wondered if we could use these apps to scan areas on site in need of a design solution, to help communicate the geometry and constraints to designers, potentially better than photos alone can. It also allows designers to take their own measurements, which are accurate enough for many designs.

A site visit would still be the ultimate way of understanding a scenario (for now) and this would not necessarily rule out the need for one if the situation is complex. However, I saw this as potentially offering a step up from the typical photos and videos we receive from site. And then I realised the wider implications these may one day have, in recording as-built information in 3D to eventually feed into our BIM models and digital twins.

Can you give us more details about the Thames Water project?

Our Thames Water framework requires us to push for innovation. Thames Water is investing in improving its existing infrastructure, which includes upgrading its current water treatment and waste treatment facilities, by way of installing new treatment and storage tanks, for example. It is just one of the frameworks and schemes we are trialling this technology on.

From a temporary works point of view, I made the most of site visits in the early stages to test these applications. I then used these models alongside photographs in discussion with temporary works designers to communicate the works required.

Having a 3D model really helps to make a scenario more intuitive, and with tools like SketchUp you can quickly mark up and add illustrations to the model in 3D, making the entire process simpler and more efficient.

What benefits are coming through in the trials?

We are still in the early stages of testing these apps, but the potential we are seeing is substantial.

The main benefit currently is that we can have someone a few hours away use one of these apps to create a quick 3D representation of a situation on site that needs a design. The accuracy of the model is often good enough to prepare some initial calculations and to help identify where more accurate measurements might be needed on site.

The apps also take advantage of LiDAR scanning if the hardware has the ability to do so, further improving the accuracy of the models and capturing even finer details.

Screenshots from the Polycam app
Is one app proving to be better than the rest – and if so, in what way?

We have tested a number of applications so far, such as Metascan and Polycam, many of which offer very similar solutions. Polycam has been slightly more user-friendly and intuitive so far, but there are likely other apps out there still to trial. We are aware of new technologies, such as “neural radiance fields” (or NeRFs). These use AI and neural networks to understand what is being recorded, enhancing the results.

Crucially for our goals as a business, we are not necessarily after the “best” app, but apps and tools that give us the most value and work the most seamlessly with the technologies and ecosystems we have or are developing. This means they must be able to offer easy collaboration with other applications and be simple to incorporate, which currently Polycam is doing well.

What sort of learning among the team do you need to harness this technology?

“One of our core values is to make technology accessible to all, and so having applications that people can easily pick up and work with is a key driver for us.”

Ben Bennett

One of our core values is to make technology accessible to all, and so having applications that people can easily pick up and work with is a key driver for us, and is necessary, we believe, to achieve our digital roadmap.

These applications tend to be very simple and intuitive to use and do a lot of the work for you. For instance, in Polycam, you can use the automatic picture mode and it will then decide when to take photos for you based on its recognition of what is being scanned, while you move around the object or area.

The training itself can be provided via Teams in a matter of minutes. There are also plenty of excellent tutorials freely available on YouTube, as there is still a technique to getting the best results from these applications.

When might you use it on other projects?

Currently, when a design requirement comes up on one of our sites, I will invite sites to trial the use of one of the mobile photogrammetry applications to help communicate the scenario and will continue to increase the awareness of these around the business.

I am also looking to capture design items I see out on sites that I think may be applicable elsewhere, as this offers an improved way of explaining possible solutions to others in the future and can be fed into our Configurator tool [explained below], when planning schemes at the pre-construction stage in 3D.

Are there any other technologies you are trialling/using in conjunction with this or more generally in your line of work?

Our digital team is actively investing and looking into the use of AR/VR technologies on sites and the possibilities these may present in terms of improving many aspects of the work we deliver.

Tilbury Douglas, in partnership with Unity, recently launched its Connect Configurator, which seeks to accelerate the design process by enabling clients to quickly develop their own fully detailed BIM models.

There are also some exciting developments in the realm of photogrammetry and AI that, in simple terms, involve the application understanding what it is seeing and applying that contextual understanding to improve results. We are incorporating these into the Connect platform with Connect Vision.

We see the huge potential these apps and technologies have to offer for the future of construction. For example, we see these apps working with our Connect Configurator and BIM models to record and communicate 3D as-built information during the construction phase.

Use of AR

Furthermore, imagine being able to scan and record rebar arrangements before pouring concrete. Designers could use this to verify what has been installed and then this would all form part of the final deliverable and digital twin. This would offer the ability to go back later and, if needed, see through the concrete in AR.

In the not-so-distant future, we will be able to walk around our buildings and infrastructure and see through the structure with AR in real time, revealing where services are installed or where hidden structural details exist.

We see our Connect Configurator allowing us to lay out the road map of a project and then tools like this will allow us to compare reality to that map.

Following on from our work with Unity, which is perhaps better known as a video game engine, we also see a future where physics engines advance to the point where they can be used to assess structures in real time, rapidly identifying design faults in existing structures and making them far less likely in future ones.
