Why do people love drawings?

Many people can’t imagine a world where buildings are built without drawings.

But why? Are drawings a perfect method of communicating and documenting a design? Why do people in the AEC industry love drawings so much? And why do people defend the concept of drawings so strongly?

The purpose of this blog post is to discuss how drawings are far from ideal, the biases people may have towards drawings, and to open minds to other possibilities.

Drawings: Far from Ideal

One should not make the mistake that a group of experts ever sat down and made the informed decision that a drawing was the perfect and ultimate method of communicating a design.

Drawings as we know them exist because of constraints and almost everything about how drawings look is a compromise.

The medium was constrained to paper, which nowadays means sizes that conform to an ISO standard, based on a surface area of 1 m² and an aspect ratio of √2. There are many standards related to drawings (I should know, I’ve read and implemented many of them), but how much value are they really adding?

The restriction of space means details must be crammed in and scaled down. In fact, the ability to communicate a 3D design in a small 2D space is so limited that people have had to come up with almost a whole new language and symbols system.

To show depth, different line thicknesses are needed. To show object types and cut lines, funny little dashed and/or dotted lines are used. Child-like hatch patterns are needed to demonstrate things like bricks, blocks, and insulation.

When you get into the world of engineering, the symbols become weirder still. Fancy triangles for valves, yin-yang symbols for rising and falling pipework, and let’s not get started on reinforcement drawings.

Drawings can be so abstracted from the real world that it takes a lot of education and experience to transform them into a mental picture of what something actually physically looks like. An awful lot of design intent can get lost going from one person thinking in 3D, drawing it in 2D, and then someone mentally translating it back to 3D again.

It is hugely ironic that many 3D authoring tools allow you to model in 3D because that’s the most natural way of thinking, but then encourage you to forget how useful 3D is and squash everything back to 2D again.

Drawing bias

Perhaps the rise of 3D makes people feel under threat. Many people in the industry have dedicated many hours to learning about and practising the “art” of drawing creation.

Maybe with the decreasing relevance of this skill set, people feel the need to defend drawings to a greater degree than is necessary.

There is certainly a rose-tinted, nostalgic aspect. LinkedIn is littered with people clamouring for the good old days when apparently project drawings and designs were always perfect, and having rooms full of people hunched over drawing boards was the pinnacle of engineering.

There is some merit to the argument that drawings are already familiar to people in the industry, but this quickly becomes a “we do it this way because that’s how it’s always been done” type argument and promotes a closed mindset.

Open minds, not PDFs

Okay, I will admit there are some situations where a “drawing” can be useful. And I’m purposely not giving any specific solutions or alternatives, because my aim here is to get people to really think about drawings before defending them.

For example, I’m a big fan of schematics because they can communicate a complex system in a small space, but even schematics are probably best referred to as diagrams, because they are intentionally abstracted from physical reality.

But, if drawings as they are today didn’t exist, and you were tasked with creating a method of communicating an AEC design, is a 2D paper drawing really what you would come up with?

With all the technology available today, such as databases, 3D graphics engines, tablets, even just the internet, surely there is a better way. No dominant new method has yet emerged, but there are certainly strong signs of an AEC future that doesn’t involve drawings to anywhere near the same extent as they are used currently.

Ask why?

What I’m asking is for people to ask themselves “why?” when it comes to drawings. Why do we have drawings? Why are drawings the way they are? And most importantly, are there any alternatives?

If you can ask yourselves these questions and you’re satisfied with your answer, then that’s fine. But, if like me you aren’t satisfied, if you think things could be done differently, then I believe that is a big step towards changing the AEC industry for the better.

The IFC Diagram

IFC Schema Architecture Overview

What a lovely combination of circles, rectangles, squares, octagons, and an inverted triangle the above diagram is. It sort of reminds me of a kid’s shapes exercise from school.

But of course it’s much more than that; it’s actually a diagram showing how to represent what we do in the AEC industry in a digital way. In this post I’m going to dig a bit more into what each part of the diagram means.

IFC is like an onion

It has layers, and once you start to peel those layers back, it might make you cry if you don’t know what you’re doing.

Each layer builds upon the layer below, so I’m actually going to talk about each layer in the opposite order to that shown in the diagram. The list below is ordered from the most generic layer to the most specific layer:

  1. Resource layer – generic things not specific to AEC
  2. Core layer – the most basic things in AEC
  3. Shared layer – things common to many disciplines
  4. Domain layer – things specific to one discipline

Let’s go through each layer.

Resource layer

IFC schema Resource layer (©buildingSMART International Ltd)

This is where the basic building blocks live, and I mean the absolute basics.

For example:

  • Units definitions
  • Shape / geometry
  • Date and time descriptions
  • Money and currencies
  • Property data types

Almost none of these things are specific to AEC. A lot of these things could apply to almost any other industry in the world.

There are some hints of our industry though. For example, there’s a bunch of structural loading objects which are only really useful for structural engineering.

Core layer

IFC schema Core layer (©buildingSMART International Ltd)

Again, a lot of the things here are still very generic. Here we have basic objects, property sets, and different types of relationships, all of which could apply to any industry.

In this layer you have lots of lovely academic language like:

It captures general constructs, that are basically founded by their different semantic meaning in common understanding of an object model

That’s not to understate its importance. As everyone in AEC knows, every building literally starts with a good foundation to build upon. The Resource and Core layers are the digital foundations for the AEC industry.

Shared layer

IFC schema Shared layer (©buildingSMART International Ltd)

This is where things get more down to earth and contains the most common objects in AEC. It’s in this layer you’ll find physical things such as walls, stairs, windows, fasteners, earthworks, and MEP fittings.

There are also some broader things, like definitions for management and facilities, such as permits, cost schedules, furniture, and occupants.

Domain layer

IFC schema Domain layer (©buildingSMART International Ltd)

And finally, the Domain layer is where the discipline specific things live.

As of version 4.3, IFC covers a lot of disciplines. These disciplines are:

  • Architecture
  • Building controls
  • Construction management
  • Electrical
  • HVAC
  • Plumbing
  • Ports and waterways
  • Rail
  • Road
  • Structural
  • Tunnel (Work in Progress)

I’m sure everyone can agree this is a pretty extensive list. This would cover most AEC projects, and for those that it doesn’t cover there are probably groups in buildingSMART working to cover more.

The Shared and Domain layers are the ones that AEC professionals would recognise. In fact, if one was to teach IFC, it may actually make sense to start at these layers and work backwards. Otherwise, there’s a good chance people will get lost and turned off by the highly abstract nature of the lower layers.
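To make the layers a little more concrete, here’s a minimal sketch using the open source xBIM library (with a placeholder file path) that queries a single model for entities defined in different layers:

```csharp
using System;
using System.Linq;
using Xbim.Ifc;
using Xbim.Ifc4.Interfaces;

class LayerExample
{
    static void Main()
    {
        // "example.ifc" is a placeholder path, not a real sample file
        using (var model = IfcStore.Open("example.ifc"))
        {
            // Resource layer: generic building blocks such as unit definitions
            var unitCount = model.Instances.OfType<IIfcSIUnit>().Count();

            // Shared layer: common physical objects such as walls
            var wallCount = model.Instances.OfType<IIfcWall>().Count();

            // Domain layer: discipline-specific objects, e.g. HVAC air terminals
            var terminalCount = model.Instances.OfType<IIfcAirTerminal>().Count();

            Console.WriteLine($"Units: {unitCount}, Walls: {wallCount}, Air terminals: {terminalCount}");
        }
    }
}
```

The point is that entities from every layer sit side by side in the same file; the layering is about how the schema is organised, not how the data is split up.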

Summary

What we’ve learned here is that the IFC schema is made up of a number of layers, ranging from highly generic to discipline specific. These layers are called the Resource, Core, Shared, and Domain layers.

Between these layers, it’s possible to describe most things you’d ever want to describe in the AEC industry in a digital way.

In praise of geometry

“It’s about the data, not the 3D model!”

That’s what we’ve all been shouting for the last decade to stop people getting distracted by pretty, spinny, whooshy models.

But, I fear that the importance of, or at least the utility of, 3D geometry is being overlooked.

After all, geometry is data too!

What is “geometry”?

But first, what do I mean by “geometry” in this context?

I am referring to the defined shape of objects: the numbers and relationships that make up the vertices, edges, faces, and solids that are a digital representation of real-world physical objects.

Sometimes it is disparagingly referred to as “graphical” data, which I think undersells its importance.

For the purposes of this post, I’m going to differentiate geometry from tabular or key-value style property sets, which fit neatly into databases and are often easily extracted from 3D BIM models or other data sources.

Why geometry is important to AEC

Many businesses are trying to emulate big tech by jumping into learning data science, statistical analysis, and AI / machine learning.

These concepts may be great for finance and tech companies whose lifeblood is numbers and data, but the construction industry is rooted in real-world, solid physicality, and that seems to be getting forgotten.

In my experience, a significant amount of our industry’s problems arise from geometry based issues but we don’t have the skills or tools to digitally analyse and create solutions for them.

Usefulness of geometry

I admit I have a slight bias, working for a contractor, where a large proportion of problems come from the shape, size, orientation, and location of objects. You can’t give a steel fixer an Excel spreadsheet, or a plumber a graph showing the gradient descent curve of previous pipes installed.

To build anything useful, we need measurements and locations.

We need geometry.

In fact, well structured geometry can reduce the overall amount of data.

Why have a “Door Height” property when that data is actually already held in the door geometry itself?

Why have a “Floor Thickness” property when the actual thickness of the floor is defined by its geometry anyway?

Dimensions as property values can be confusing when it’s not abundantly clear which distance is actually being referred to.
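As a rough sketch of that idea in plain C# (the door mesh below is made up purely for illustration), a dimension like height can be derived straight from the geometry by measuring the vertical extent of its vertices:

```csharp
using System;
using System.Linq;

class DerivedDimensions
{
    // A single mesh vertex in model coordinates (metres)
    struct Vertex
    {
        public double X, Y, Z;
        public Vertex(double x, double y, double z) { X = x; Y = y; Z = z; }
    }

    static void Main()
    {
        // Hypothetical vertices for a door leaf, 0.9 m wide by 2.1 m high
        var doorVertices = new[]
        {
            new Vertex(0.0, 0.0, 0.0), new Vertex(0.9, 0.0, 0.0),
            new Vertex(0.9, 0.0, 2.1), new Vertex(0.0, 0.0, 2.1)
        };

        // The "Door Height" is already implied by the geometry:
        // the difference between the highest and lowest Z values
        double height = doorVertices.Max(v => v.Z) - doorVertices.Min(v => v.Z);

        Console.WriteLine($"Derived door height: {height} m"); // 2.1 m
    }
}
```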

Geometry in academia

The basics of 2D CAD are often taught in AEC courses, but when it comes to 3D, most people’s experience comes purely from software, with no teaching of the underlying theory. The fundamentals of 3D are rarely taught; the industry jumps straight into highly abstracted parametric objects without explaining the different ways that geometry can be described and the reasoning behind them.

Surely there is a case that more people in construction need this understanding so that we can use and exploit the massive amounts of geometry data that we have.

Complexity of geometry

A big challenge lies in understanding geometry data.

The structure of geometry is a difficult subject, as there’s often no single correct way of digitally representing a real-world object. There are explicit meshes, boolean operations, and solid modelling with extrusions, sweeps, and revolves, all of which are approximations, because we can’t actually model something down to the atomic level.
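To illustrate that variety, here’s a hedged sketch of the same simple 1 × 2 × 3 m box described in two different ways, once as explicit vertices and once as an extruded profile. Both are valid digital representations of the same physical object:

```csharp
using System;

class RepresentationExample
{
    static void Main()
    {
        // Representation 1: explicit geometry, the eight corner vertices of a 1 x 2 x 3 m box
        // (a real mesh would also list which vertices form each face)
        double[,] vertices =
        {
            { 0, 0, 0 }, { 1, 0, 0 }, { 1, 2, 0 }, { 0, 2, 0 },
            { 0, 0, 3 }, { 1, 0, 3 }, { 1, 2, 3 }, { 0, 2, 3 }
        };

        // Representation 2: solid modelling, a 1 x 2 m rectangular profile extruded 3 m upwards
        double profileWidth = 1.0, profileDepth = 2.0, extrusionLength = 3.0;
        double volume = profileWidth * profileDepth * extrusionLength;

        Console.WriteLine($"Vertices: {vertices.GetLength(0)}, extruded volume: {volume} m3");
    }
}
```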

AEC has the added challenge that tolerances are often much looser than in other engineering disciplines, which can lead to bigger discrepancies between the digital representation and the real world.

Some standards from buildingSMART help to add restrictions (e.g. rebar should be modelled as swept disk solids), but there will always be odd exceptions and inconsistencies.

Geometry data structures

Geometry is not just a simple list of property sets, names, and values. It’s not tabular and can be harder to interrogate.

It often involves a lot of spatial maths, which can be scary even for engineers.

Useful geometry analysis requires more than just throwing data into a machine learning algorithm; many calculations are often needed to convert the underlying values into real-world locations.
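As a small sketch of the kind of spatial maths involved (all values here are made up), converting a point from an object’s local coordinate system into a real-world location means applying a rotation and a translation:

```csharp
using System;

class PlacementExample
{
    static void Main()
    {
        // A point defined relative to an object's own origin (local coordinates, metres)
        double localX = 2.0, localY = 0.5;

        // The object's placement in the world: an origin plus a rotation about the Z axis
        double originX = 100.0, originY = 250.0;
        double rotation = Math.PI / 2; // 90 degrees

        // Rotate, then translate, to get the real-world location
        double worldX = originX + localX * Math.Cos(rotation) - localY * Math.Sin(rotation);
        double worldY = originY + localX * Math.Sin(rotation) + localY * Math.Cos(rotation);

        Console.WriteLine($"World position: ({worldX:F2}, {worldY:F2})"); // (99.50, 252.00)
    }
}
```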

Geometry problems

What sorts of problems can geometry solve?

Many regulations are based on distances. For example, accessibility requirements or boiler flue distances to openable windows.
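A distance rule like that can be checked directly from the geometry with a few lines of code. This is only an illustrative sketch; the coordinates and the 300 mm limit are made-up values, not a real regulation:

```csharp
using System;

class DistanceRuleCheck
{
    static void Main()
    {
        // Illustrative positions (metres) of a boiler flue terminal and the nearest openable window edge
        double[] flue = { 12.40, 3.10, 5.25 };
        double[] window = { 12.40, 3.10, 5.00 };

        // Straight-line distance between the two points
        double distance = Math.Sqrt(
            Math.Pow(flue[0] - window[0], 2) +
            Math.Pow(flue[1] - window[1], 2) +
            Math.Pow(flue[2] - window[2], 2));

        // Illustrative minimum separation, not a real regulation value
        const double minimumSeparation = 0.3;

        Console.WriteLine(distance >= minimumSeparation
            ? $"OK: {distance:F2} m separation"
            : $"Too close: {distance:F2} m separation");
    }
}
```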

When estimating and quantifying, surface areas and lengths of materials are needed. In the BIM world, we rely totally on black-box algorithms to do these calculations for us.

Logistics relies on knowing what size objects can fit into certain areas, and movement of objects through spaces.

I’m sure there are many more problems out there that could be solved better by simply understanding and effectively analysing the underlying geometry.

It’s worth it

Despite all the complexity, I believe that the construction industry would benefit from an uplift in digital geometry knowledge, not just data.

Yes, it looks a bit scary on the face of it, but once you look into the details there’s no black magic happening and it can all be reasonably understood.

Why construction is the hardest engineering

The construction industry is awful isn’t it? We’re unproductive, we’re always late, we run over budget, and don’t always build to the required quality. We’re crap basically…

…so begins many a self-flagellating article about construction, before going on to blame a specific organisational issue like “bad contracts”, “poor collaboration”, “risk / interface management”, “lack of BIM”, etc.

But I believe that all of these articles and op-eds miss the fundamental, physical differences inherent in construction that make it one of the hardest engineering disciplines.

We frequently talk ourselves down and put automotive, manufacturing, and aerospace engineering on a pedestal.

However, comparisons to these industries do not always make sense. This is because of fundamental constraints unique to construction, which include dealing with:

  • Greater mass
  • Greater distances
  • Specific locations
  • Greater variability

Greater mass and distances

This is the most overlooked constraint when it comes to engineering in construction. We forget that the things we build are massive. Like, really really big.

Compare the size of the biggest car, plane, or machine to even a modest housing project and you’ll see how small they seem. Then compare them to skyscrapers, bridges, warehouses, railway lines, dams, etc. The products we engineer have greater mass and cover greater distances than anything else in the world.

Remembering that Force = Mass x Acceleration, and Energy = Force x Distance, having a greater Mass and a greater Distance is naturally going to need more energy and more work.
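To put rough, purely illustrative numbers on that (the masses and heights below are my own assumptions), compare lifting a 10-tonne precast unit 20 m up a building with lifting a 50 kg component 2 m onto a production line:

```latex
E = F \cdot d = (m \cdot a) \cdot d \approx m \cdot g \cdot h

E_{\text{precast}} \approx 10\,000\,\text{kg} \times 9.81\,\text{m/s}^2 \times 20\,\text{m} \approx 2\,\text{MJ}

E_{\text{component}} \approx 50\,\text{kg} \times 9.81\,\text{m/s}^2 \times 2\,\text{m} \approx 1\,\text{kJ}
```

That is roughly three orders of magnitude more energy for a single lift, before even considering horizontal movement around the site.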

The greater mass means we need more powerful machines to move materials around. The smallest cranes we use are stronger than the biggest KUKA 6-axis robots.

The greater distances mean we can’t build in a factory, and the build process itself will always require far greater movement of people and materials.

Specific locations

As obvious as it sounds, buildings are in specific places. Due to the large mass and distances mentioned above we’re constrained to building, or at least assembling, in the final location.

Every new building site is a new factory, with all the human resourcing, organisational setup, and logistical problems that go with it.

Other engineering disciplines can pour huge resources into production lines knowing they will be there for years to come. In construction, a factory (building site) needs to be set up in weeks or even days, in less than ideal conditions, and in the knowledge that it will only exist for one or two years at most.

And on top of that, as the construction progresses the factory itself changes. Personnel, access routes, and logistics are always in flux.

Greater variability

So those are the three physical constraints of mass, distance, and location. On top of that, there’s the human constraint of individuality.

With the exception of purely functional structures, clients often want or need a unique or individual design with specific requirements.

This means the construction industry is almost constantly prototyping; every building that’s built is being built for the first time. This makes iterating on and improving previous designs really hard.

Even if it’s the same design, it’s in a different place due to the specific-location problem. Ground conditions, access routes, utility connections, logistics, etc. can all be very different for two identical buildings built right next to each other.

Design for Manufacture and Assembly

The concept of Design for Manufacture and Assembly is to break down what we build into smaller parts. The smaller mass and distances allow them to be built in controlled conditions, reducing the impact of needing to build in a specific location.

While this is definitely a good direction to go in, it doesn’t reduce the total mass and distances; everything still needs to be assembled in a specific location (requiring even larger and more powerful machines than usual) and is still susceptible to high variability between parts and projects.

Easier

Let’s be fair to the other engineering disciplines. In construction we often work to far more lenient tolerances; it is not precision engineering most of the time.

We also use far simpler and more common materials and components. It’s hardly rocket science…

Summary

We should at least be aware of the fundamental differences, and therefore unique constraints, that construction faces before comparing ourselves negatively to others. Solutions relevant to other industries are not always relevant to us.

So, the next time you read that construction is underperforming, remember that in construction we build bigger, we build in more difficult conditions, and we have to build differently every time.

That’s always an impressive feat of engineering.

Do we really need data in models?

This might seem like an obvious question. Yes, of course we need data attached to objects in models, how can we do BIM otherwise?

In this post we’re going to take the idea of data in models to extremes to see what the future could look like. One extreme being putting all data into a model and the other extreme being putting no data into a model.

Extreme 1 – All data in a model

One of the first ideas with which BIM was sold was that 3D objects in a model are intelligent because they have data attached to them. Whenever we refer to attached data it’s usually understood that it is mostly design data and specification data.

FM / COBie data can often be thrown in, and BS1192 says we should attach health and safety information too. This means a typical model today could have the following types of data:

  • Geometry data
  • Design data
  • Specification data
  • COBie data
  • H&S data

That is already a lot of data to store and manage in a model. But what is the solution if we want to take things to the next level and make our models more intelligent? Let’s take this to the first extreme, could we simply keep attaching all the data we can think of to a model? The following is a list of more data that many would consider useful to be in a model:

  • Fabrication data
  • Installation data
  • Programme data
  • Cost data
  • Quality data
  • Inspection data
  • Procurement data
  • Delivery and tracking data
  • Commissioning data
  • Demolition data
  • Environmental data
  • And much more data…

That would clearly be too much data to attach to a model! Attaching more data to a model is a simple way of making models more intelligent, but it is not scalable enough to serve the whole industry. Model files will end up collapsing under their own weight, not to mention it being impossible to manage so many people contributing to the same model.

Attaching more and more data cannot be the solution to taking models to the next level.

Middle ground – Some data in model, some stored externally

Instead of attaching all data to a model, we could store some data in external systems and link those external systems to a model.

But this then begs the question, which data gets to be attached to a model and which data is linked from external systems? And what criteria could we use to make this decision?

Different people would deem different data to be the most important, based on what they require for their job role. Making the split between attached and linked is going to be very tricky. We’d be elevating one stakeholder’s data over another, which is bound to cause friction.

I’ve taken attaching data to models to an extreme to show that we can’t just keep adding more data forever, at least some data needs to be in external systems. What if we take this idea and push it to the opposite logical extreme?

Extreme 2 – No data in a model

What if we have no data in models, and all data is in external systems?

A 3D model file could be purely for geometry and shape definition, an old-style dumb CAD model. Design data could be linked from a different system or file format. Specification data from somewhere else. And all the other types of data could live in microservices and be linked to a model by GUIDs or similar.
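As a rough sketch of that idea (the endpoint URL and the response shape are entirely hypothetical, not a real service), cost data for an element could be looked up from an external microservice using nothing more than its IFC GlobalId:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ExternalDataLookup
{
    static async Task Main()
    {
        // The IFC GlobalId of an element in the geometry-only model (illustrative value)
        string globalId = "2O2Fr$t4X7Zf8NOew3FLOH";

        // Hypothetical cost microservice; the model itself stores no cost data at all
        using (var client = new HttpClient())
        {
            string json = await client.GetStringAsync(
                $"https://cost.example.com/api/elements/{globalId}");

            // The cost data lives, and is managed, entirely in the external system
            Console.WriteLine(json);
        }
    }
}
```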

This end of the logical extreme feels more scalable than just attaching more data.

It may seem like an old idea, but it’s an approach that is now more feasible than ever due to the rise in web services, APIs, widespread internet connectivity, and simplified cloud infrastructure.

Yes, this is a bit more complex than simply whacking 1,000 properties on every object, but it is far more scalable. Because all the data isn’t stored in a single place, it becomes simpler for different teams to control and manage their own data without impacting others. All that’s required is for a team to have data that can be connected to and used by others.

The clear intention of buildingSMART is to move IFC towards being API based. BCF-API already exists and has multiple implementations. And there is work towards a CDE-API as well. This trend towards standard APIs clearly supports extreme 2 and having less data in models.

Summary

Attaching all data to a model is difficult to scale, attaching some data to a model means having to decide which data is most important, and linking all data may not be feasible right now.

What does this mean for the future of models and data?

In my opinion, in the short term the industry will continue moving towards extreme 1, attaching more data to models. But over time we’ll start to move back towards extreme 2, having more data stored externally and less data attached to models.

A complete open source IFC development stack

Getting started with programming for IFC has never been easier or cheaper.

I present here a stack of great tools that together allow you to build applications that can read, write, analyse, automate, store in a database, and do whatever you want with IFC data. The sky really is the limit. They are all:

  • Open source
  • Zero cost
  • Can be used in commercial and business environments
  • Cross platform
  • Don’t require elevated admin rights to install or run

The tools are:

  • Version control: Git
  • Code editor: Visual Studio Code
  • Framework: .NET Core 2.2
  • Language: C#
  • IFC writing / reading toolkit: xBIM
  • Bundler: dotnet-warp

(Note: this is not a detailed, from scratch guide, there may be some degree of additional setup and configuration required depending on your exact system)

I highly recommend against creating your own IFC reader and writer unless you really need to; it is not a task for the faint-hearted, and you’ll need to be absolutely sure that none of the existing solutions fit your requirements.

Git

Git provides the source code version control system. It tracks all changes and eventually allows you to upload to private and public code repositories, such as GitHub.

You can download and install the latest version from here: https://git-scm.com/

Visual Studio Code

Your Integrated Development Environment is really important; this is where you’ll spend the majority of your time coding and debugging the tools you write.

The Professional and Enterprise versions of the traditional Visual Studio tools are very expensive, hardcore tools, mainly aimed at full time software developers.

A faster and cheaper approach is to use Visual Studio Code. This is a really popular, lightweight Integrated Development Environment that you can install here: https://code.visualstudio.com/

If you use C# with Visual Studio Code, you should also install the C# tools extension from within Visual Studio Code.

.NET Core 2.2

This is Microsoft’s completely open source and free application framework. It contains all the tools needed to develop cross-platform libraries, console apps, cloud apps, and more.

It doesn’t contain full user interfaces like Winforms or WPF, but my suggested route if you require a UI is to build a command line app for the IFC logic and wrap it with a UI from another technology, such as Electron.

At the time of writing, the latest xBIM library only supports up to NET Core 2.2 so don’t install a version any higher than that. When you’re reading this, check yourself what the highest version xBIM supports.

To install without needing elevated admin rights use the Powershell script here: https://dotnet.microsoft.com/download/dotnet-core/scripts
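Once the SDK is installed, a new console project with the xBIM package added can be created from the command line. A minimal sketch (the project name is just an example):

```
dotnet new console -o IfcTool
cd IfcTool
dotnet add package Xbim.Essentials
dotnet run
```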

C#

C# is an open source and open spec language from Microsoft. It is hugely popular with literally millions of users and a vast ecosystem.

It has a slightly steeper learning curve than scripting languages like Python and JavaScript, but it is well worth the effort.

It installs with .NET Core listed above, and there are good beginners tutorials here: https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/

xBIM

This is a software library that takes away the difficult parts of reading and writing IFC files and allows you to focus on the business logic you need to implement.

It’s developed by Northumbria University and has contributions from many other people around the world.

See the Github page for installation instructions: https://github.com/xBimTeam
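As a flavour of what working with xBIM looks like, here’s a minimal sketch that opens an IFC file (placeholder path) and lists all of the walls it contains:

```csharp
using System;
using Xbim.Ifc;
using Xbim.Ifc4.Interfaces;

class Program
{
    static void Main()
    {
        // Open an existing IFC file ("model.ifc" is a placeholder path)
        using (var model = IfcStore.Open("model.ifc"))
        {
            // Query every wall in the model, regardless of which authoring tool produced it
            foreach (var wall in model.Instances.OfType<IIfcWall>())
            {
                Console.WriteLine($"{wall.GlobalId}: {wall.Name}");
            }
        }
    }
}
```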

dotnet-warp

By default, a .NET Core 2.2 app will build to a .exe plus a large number of DLLs and linked libraries. This makes deployment of general desktop apps quite messy.

To get around this, you can use the dotnet-warp tool, which bundles everything into a single .exe file for easy deployment and management.

Nuget installation: https://www.nuget.org/packages/dotnet-warp/
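A rough sketch of the usage, assuming it’s installed as a .NET global tool and run from the project folder:

```
dotnet tool install --global dotnet-warp
cd IfcTool
dotnet-warp
```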

Alternatives

If C# and .NET are a bit too heavy, then you can swap them for Python and swap xBIM for IfcOpenShell.

If they aren’t heavy enough, IfcOpenShell is also compatible with C++.

Summary

I hope this article is useful for people looking to do development and automation with IFC.

And if you’re already developing with IFC, please comment below to let me know what your stack looks like! What language, IDE, and IFC toolkits do you use?

Stop sharing native models

Everyone wants collaboration and sharing of information, and one of the first things people ask for on projects is for others to share their native models, whether that be Revit, Tekla, ArchiCAD, or something else.

I myself have been guilty of this.

It seems like a good idea at the time because you get the exact data that the other people are working with, you can often merge or link models a little more easily, and can make changes to their models if needed.

Dependency Hell

However, when you look at the big picture, sharing and using native models has one very big downside: Dependency Hell! I highly recommend you read the linked Wikipedia page, and I’m sure you’ll relate if you’ve ever worked on a project where everyone gets stuck on the same version of Revit.

Dependency Hell is enough of a problem that you should seriously consider whether sharing native models actually solves real problems and whether it’s worth it when you take into account all the downsides.

Connected, not coupled

We want people on a project to work together, to share information, and to be connected, but we don’t want them to be dependent on each other.

The tools that any particular stakeholder uses should have little to no effect on what anyone else uses. The tools shouldn’t matter to anyone except the people using them; it’s the output that is important. Unfortunately, by sharing native models and specifying software, we make tools matter.

By sharing native models we couple people together so tightly that we restrict the choices they can make. One party can’t use a new version or tool unless everyone changes too, which is often a very difficult and expensive process.

Use the best tool for the job

The overriding priority for any IT systems or software setup should be to allow people to use the best tool for the job they need to do.

This not only improves efficiency, but often makes for better employee well-being. Nothing sucks more than knowing there’s a better tool out there but not being able to use it.

By mandating a certain tool or compatibility with a proprietary format people have to use what is specified, not what might be best. This is not good.

Increase competition

If you specify a particular piece of software you’re cutting out businesses that don’t use it and reducing your options.

As a double whammy, you’re also encouraging all businesses to use the same tools, potentially creating monopolies that inevitably stagnate and stop innovating.

Focus on IFC

Some projects specify many different formats: IFC, native, and even 3D PDF. Apart from the information coordination nightmare this creates, it means people don’t have a focus for their workflows.

Some people will create workflows for PDF, some for native, and some for IFC. This represents duplicated effort. If you only specify a free, platform-agnostic format like IFC, then everyone focuses around that format and the chances of people being able to share knowledge and share workflows increase significantly. This is an important way to add value to a project.

Summary

In summary, construction project teams should absolutely work together, but they need to be more aware of the big-picture problems that come from aligning so tightly that they are no longer free to make decisions that would ultimately be in a project’s best interest.

Getting Valid Input Data

As the construction industry becomes more data driven, it’s often the little things that break data integration processes. For example, say you try to set up a process that links data from a model to a construction sequence; the only way to do that automatically is to have some form of common value, so that one system knows which things to match from the other.

However, in both the modelling and the sequencing software, those fields will often be free manual text input. The nature of these two separate disciplines often means that any discrepancies won’t be found until days or weeks after the incorrect values have been entered. And the more time that has passed, the more difficult they are to correct.

The nature of construction and designs means that there are a lot of humans involved in designing, engineering, and creating things which need to be named and described in ways that will never be 100% predictable.

This creates painful little errors, such as “0” (zero) being typed as “O” (letter), or spacing and separation characters being “/” or “-” or “_”. These are difficult to spot and can completely break even the most well put together digital systems.
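As a tiny illustration of how fragile that matching is, here’s a hedged sketch with made-up codes that joins model elements to sequence activities by a shared value and reports anything that doesn’t match; a single letter “O” typed instead of a zero is enough to break the link:

```csharp
using System;
using System.Collections.Generic;

class KeyMatchingExample
{
    static void Main()
    {
        // Activity codes typed into the model (illustrative values)
        var modelElements = new Dictionary<string, string>
        {
            { "ZONE-01", "Wall W-101" },
            { "ZONE-O2", "Wall W-102" } // the letter "O" typed instead of a zero
        };

        // Activity codes typed into the sequencing software
        var sequenceActivities = new HashSet<string> { "ZONE-01", "ZONE-02" };

        // Anything that doesn't match exactly silently falls out of the integration
        foreach (var element in modelElements)
        {
            if (!sequenceActivities.Contains(element.Key))
                Console.WriteLine($"No matching activity for {element.Value} (code '{element.Key}')");
        }
    }
}
```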

Approaches

There are a number of ways to increase the likelihood of correctly inputted and aligned data. In order of effectiveness:

  1. Restricting input to predefined values
  2. Not allowing input if the input does not pass validation rules
  3. Displaying warnings if input does not pass validation rules
  4. Performing automated validation
  5. Performing manual validation

The key to all of these is informing the person that is inputting data that there is a problem as quickly as possible.

The human factor

Think about whenever you’ve had to fill in a form online. We much prefer it when a field instantly highlights we’ve done something wrong, as opposed to filling everything in, clicking submit, and then it telling us we’ve typed something wrong or that a username is already taken.

I think this boils down to humans simply not liking going back to things they believe they’ve already finished. We like to complete a certain task and then move onto something new. To counteract this, we need to inform people of problems as fast as possible.

Implementation

In reality, the trouble with approaches 1, 2, and 3 is that they rely on individual tools allowing this level of input control, which is difficult to implement. Most input fields in BIM tools are simply free entry, and creating your own validation system for every tool is not feasible.

Of course, if you are in a position to create your own apps or do bespoke development, then absolutely you should try 1, 2, and 3.

Number 5, manual validation, is often performed with tools such as Solibri or Excel. The major problem with these tools is the manual and time-consuming nature of performing the validation steps. It’s often done by specialists, and by the time a model arrives for validation, has been validated, and has had a report created for it, the original modeller or designer may have moved on to different areas and will be reluctant to go back and fix little data errors.

This is where number 4 comes in. The most scalable solution for getting better data is automated validation. As soon as a model is uploaded to a CDE there should be systems in place to download and validate this model, reporting back in near real time. The person responsible for that data should be informed as fast as possible.

This process can generate data quality dashboards, which are essential to shine a light on how well aligned your systems and data are.

Blockers

The biggest blocker I see at the moment is the lack of an open, standard, and accepted data validation language for construction. MVDs go part of the way there, but aren’t granular enough to get the sort of validation we require.

We need both semantic structure validation rules (e.g. All doors should be in a wall) and allowable values rules (e.g. Levels named to the project standard). As another level, even automated design validation would be useful (e.g. No pipes more than 12m in length).
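To make those two rule types concrete, here’s a hedged sketch using xBIM (the file path and the naming pattern are made-up examples of a project standard): a structure rule checking that every door fills an opening, and an allowable values rule checking storey names:

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;
using Xbim.Ifc;
using Xbim.Ifc4.Interfaces;

class ModelValidation
{
    static void Main()
    {
        using (var model = IfcStore.Open("model.ifc")) // placeholder path
        {
            // Structure rule: every door should fill an opening (i.e. be hosted in a wall or similar)
            foreach (var door in model.Instances.OfType<IIfcDoor>())
            {
                if (!door.FillsVoids.Any())
                    Console.WriteLine($"Door {door.GlobalId} does not sit in any opening");
            }

            // Allowable values rule: storey names must match a made-up project standard, e.g. "Level 01"
            var namePattern = new Regex(@"^Level \d{2}$");
            foreach (var storey in model.Instances.OfType<IIfcBuildingStorey>())
            {
                string name = storey.Name.ToString();
                if (!namePattern.IsMatch(name))
                    Console.WriteLine($"Storey {storey.GlobalId} has a non-standard name: '{name}'");
            }
        }
    }
}
```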

Proprietary systems like Solibri Model Checker aim to fill this gap, but cannot be a long term solution.

What the solution to this problem would look like exactly, I don’t know, but I believe the problem of inconsistent and bad data is big enough to warrant thinking about.

Why aren’t Gantt charts dead yet?

Ubiquitous in construction for generations, yet no Gantt chart has ever been correct. Ever. So why do we still use them?

World War Gantt

The actual inventor of what we know as Gantt charts is difficult to determine because they have developed over time, but Henry Gantt (1861-1919) was the first to bring them into popularity, and hence they are named after him.

They were used extensively in the First World War by the US, which is hardly an endeavour famous for its organisational and management efficiency.

Despite this, they are seen as the default tool for planning construction activities. Their use is never questioned, but are we really going to say that in over 100 years the Gantt chart is still the best system of planning that we have?

Now, that is a scary thought…

Future fantasy

At best, a Gantt chart is a work of fiction, in so far as it is about imaginary events that have not happened (yet). At worst, it is a lie, because whoever is making the chart is well aware that events will not unfold exactly as they have described.

They are Minority Report-esque in their attempt to give precise dates and durations to tasks that are weeks, months, and even years in the future.

If we were to call them “Predictions of the Future” I imagine many would have less faith in them.

The future is a fog where the further ahead you look the more difficult it is to know what is there, but Gantt charts have no mechanism to show this. They make it seem like it is possible to plan a task that is due in a year’s time with as much accuracy as one that is due tomorrow.

Despite the fact no one can predict the future, a Gantt chart gives the illusion of predictability, control, and certainty. Perhaps the comfort that this illusion provides is what has sustained the usage of Gantt charts throughout the decades.

When the future is wrong

And what happens when leadership finds out that (surprise surprise) the future doesn’t turn out the way it was predicted and (dun dun duh!) the project is “behind schedule” (gasps and groans in equal measure)?

The project programme becomes a stick, a big deadly whacking stick.

A death march to the deadline begins. No celebrations are allowed. Hours get increased, costs spiral, the effect on everyone’s mental health is horrible. People get burned out and leave.

And this is just the accepted norm in construction.

In my opinion, Gantt Charts have a major role to play in this negativity.

Unpredictable

The construction industry hates unpredictability but our solution so far has been to create ever more detailed plans in an effort to get a hold of the greased pig that is building stuff.

Even with modern methods of construction and off site manufacturing, any construction project of significant size is inherently chaotic and unpredictable. But we refuse to acknowledge this and believe we can bend reality to fit our planning methodology.

I’ve even seen such daft uses for Gantt charts as software implementation and digital transformations, as if either of those things actually has fixed dates, fixed durations, and fixed scope.

So, what’s an alternative?

The first step is to just get people thinking that perhaps we can do better than basing our entire system of planning around a century old 2D chart format.

Next is to admit we can’t predict the future. The best we can do is to estimate, and the further into the future things are the less accurate we will be.

Then we can start to look at other ways of managing dates and dependencies that are agile enough to prevent change from destroying projects and people. They do exist, in the form of the Agile and Scrum methodologies that have matured in the software development world; we just need to open our minds to them.

Finally, stop focusing on the deadline and instead focus on productivity. Work on improving teams’ efficiency by making the work transparent, holding weekly lessons-learned sessions rather than one every few months that gets forgotten about, iterating on your processes frequently, and gaining feedback as soon as you can.

By improving the team you are much more likely to hit the deadlines in the first place.


Business benefits of openBIM

Communicating the business benefits of openBIM is difficult.

We’re up against armies of marketers from large corporations with multi-million pound budgets to lobby and appeal to the major industry players, as well as to spread Fear, Uncertainty, and Doubt about alternatives that could hurt their bottom line.

Unfortunately, believing and following the big monopolistic companies is often the default safe bet for IT and digital departments; after all, “Nobody gets fired for buying IBM”.

This post is my attempt to summarise the business benefits of adopting openBIM as a construction business’s underlying digital strategy. The benefits are split into the following three areas.

  1. Flexibility, choosing from many options
  2. Agility, being able to change
  3. Stability, keeping consistency

Flexibility

Sticking to solutions from just one company gives you a small box to work within; there are generally limited options and you will end up changing your process to fit the tool rather than the other way around.

Adopting openBIM doesn’t mean you can’t continue using your existing favourite software, but it does increase your available options, and it allows you to always choose the best tool for the job.

If software suppliers know that you have no flexibility and little to no choice, it gives them tremendous power to dictate cost and quality to you.

By adopting openBIM formats you are showing them that you have a wide choice of suppliers and they will have to work hard to get your money, driving competition and innovation.

Agility

Once you decide to use a single company’s platform or tool you are effectively locked in.

And you can do all the due diligence in the world and be 100% certain they have the best tools for the job at the time of deciding, but what happens if in 12 months’ time things aren’t as good as they first appeared and competitors with better solutions start to enter the market? Bad luck, you’re stuck with the initial decision and their proprietary solutions that you can’t easily escape from. The cost of change would be very significant.

(My favourite example of this is the cutting-edge UK Navy aircraft carrier powered by Windows XP. That project clearly had no room for agility and to change as new developments occurred.)

However, if you adopt openBIM as your underlying strategy the cost of change is vastly decreased. All your data will be highly compatible with IFC so you will be able to easily open your existing data in new systems with minimal migration costs.

openBIM also reduces dependencies. You can upgrade your software to take advantage of new features without having to wait for others to also upgrade theirs (if they ever do).

The costs will never be zero, because even little changes always cost money for businesses, but the cost will be reduced, and the ability to react to technological advances can be a key competitive advantage.

Stability

In this age of rapid technological change it would be a fool who tried to predict or plan how we’re going to be working in just a couple of years time.

So how does a company create a medium or long term digital strategy needed for investment decisions?

The answer is to not standardise around something as temporary and ephemeral as a tool or a platform, but to standardise around data structures. And openBIM provides the best data structures to do this with.

To be clear, we want tools and platforms to rapidly evolve and to help improve our efficiency, but the underlying data should be far more stable.

If the fundamental underlying data structures keep changing, say by changing file formats every year, then that can cause huge legacy and compatibility difficulties. For example, if you build a tool around a proprietary piece of software, it can become a sunk cost which works against change. If your tools are based on underlying standard data, they’ll have a greater lifespan and return on investment.

openBIM provides stable and comprehensive data structures you can base your construction data on.

With openBIM you can chop and change and overhaul your tools and processes frequently but maintain a stable foundation of a standard data structure.

Summary

If this blog post is too long for an elevator pitch to the CEO, here’s my attempt to boil the business benefits of openBIM down to a single snappy sentence:

Adopting openBIM gives a business the flexibility to choose the best tool for the job, the agility to change tools, and the stability to make long term decisions.