Quality and Quantity: Building a Database for Cost Benchmarking
In 2019, decision-makers in construction estimating increasingly recognise the potential of historical data to guide future developments. While the benefits are clear for projects large and small, it’s not always easy to gain actionable insights from the data at hand.
Without the right operational processes across your entire team, you might be setting yourself up for a whole lot of frustration in your rush to get ahead. Our latest blog post looks at best practice for harnessing your historical data.
The Common Challenges
Data is power in modern construction, and it’s quite common for businesses to try to aggregate every piece of operational data for future use. Whilst this works in theory, the practical application can be fraught with challenges depending on the function of your business.
On complex projects, there can be a huge volume of varying cost information: some created for internal rather than external use, some pertaining to different project stages, and so on. Even if you have a repository of fully formed construction cost estimates available, there are likely to be inconsistencies wrought by the nature of each individual project and the audience each estimate was prepared for.
Not every project follows the same workflows or work breakdown structures, and any differences will make things more difficult when it comes time to produce a cost benchmark. In the modern landscape, it simply isn’t practical to reconcile these inconsistencies manually.
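As a simple illustration of the problem, consider two projects whose estimates use different coding schemes. Before they can be benchmarked side by side, each estimate needs to be re-keyed onto a common cost breakdown structure. The short Python sketch below shows the idea; the codes, element names and figures are invented for the example and are not taken from any particular tool.

```python
# Hypothetical illustration: normalising two projects that use different
# work breakdown structures (WBS) onto one shared cost breakdown structure
# so their costs can be compared on a like-for-like basis.
# All codes, labels and figures below are invented for the example.

# Each project's estimate, keyed by its own WBS code
project_a = {"03-100": 420_000, "03-200": 185_000, "09-900": 96_000}
project_b = {"CONC.SLAB": 510_000, "CONC.COLS": 140_000, "FINISHES": 110_000}

# Mapping tables from each native WBS onto a shared benchmark structure
map_a = {"03-100": "Substructure", "03-200": "Frame", "09-900": "Finishes"}
map_b = {"CONC.SLAB": "Substructure", "CONC.COLS": "Frame", "FINISHES": "Finishes"}

def normalise(estimate, mapping):
    """Re-key an estimate onto the shared cost breakdown structure."""
    normalised = {}
    for code, cost in estimate.items():
        element = mapping[code]
        normalised[element] = normalised.get(element, 0) + cost
    return normalised

print(normalise(project_a, map_a))  # {'Substructure': 420000, 'Frame': 185000, 'Finishes': 96000}
print(normalise(project_b, map_b))  # {'Substructure': 510000, 'Frame': 140000, 'Finishes': 110000}
```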
It’s also worth mentioning that the average estimator often faces very challenging deadlines, under which the usual approach to developing estimates can go out of the window in the rush to get the job done. For this reason, it’s vital that data validation processes are swift, uniform and easy to follow even in the face of stringent time constraints.
There is no room for compromise with construction cost benchmarking. Software is now available that allows users to perform benchmarking and generate conceptual estimates with ease, while safe in the knowledge that only sound historical data is being utilised.
Exactal’s Benchmarking Solution
CostX® Benchmark allows users to build a cloud-based repository of high-quality, sanitised data that has been checked and validated. Our platform utilises a separate conversion and upload process after estimate completion that includes both programmatic checks and user verification steps. This ensures that normal estimating processes are not delayed or compromised, whilst still automating the creation of a high-quality database of historical cost information.
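To make the idea of programmatic checks concrete, the sketch below shows the kind of validation an upload step might run before accepting an estimate record, such as confirming that required fields are present and that key values are sensible. This is an assumption-laden illustration, not a description of how CostX® Benchmark works internally; the field names and rules are invented.

```python
# Illustrative sketch only: example checks a conversion-and-upload step might
# run before an estimate is accepted into a benchmark database. The field
# names and rules are assumptions made for this example.

REQUIRED_FIELDS = {"project_name", "region", "base_date",
                   "gross_floor_area", "total_cost"}

def validate_estimate(record):
    """Return a list of issues; an empty list means the record passed."""
    issues = []

    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")

    if record.get("gross_floor_area", 0) <= 0:
        issues.append("gross floor area must be positive")

    if record.get("total_cost", 0) <= 0:
        issues.append("total cost must be positive")

    return issues

record = {"project_name": "Sample Tower", "region": "QLD",
          "base_date": "2019-03-01", "gross_floor_area": 12_500,
          "total_cost": 48_000_000}

problems = validate_estimate(record)
print("OK" if not problems else problems)
```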
The CostX® Benchmark workflow allows for efficient storage of data from complex projects, as the many different estimates produced for a single project are not all uploaded into the historical dataset. This avoids duplication and ensures that only relevant information forms the database.
Our benchmarking and conceptual estimating solution is built for efficiency and flexibility. Estimates produced can be broken down across multiple levels, with a variety of cost breakdown structures supported. Users can adjust older data through index factors, perform functional analysis and comparison for specific project types, identify comparable projects through standard or custom attributes and much more. It’s even possible to easily exclude inapplicable estimate rows or values from historical estimates, ensuring that calculations are as relevant as possible.
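Index-factor adjustment in particular is straightforward to illustrate: a historical cost is scaled by the ratio of a current cost index to the index that applied when the estimate was produced. The short sketch below uses invented index values and is a generic worked example rather than an extract from CostX® Benchmark.

```python
# A minimal sketch of how index factors are commonly used to bring an older
# cost up to a current price base before it is used for benchmarking.
# The index values below are invented; real indices would come from a
# published cost index series.

def rebase_cost(historical_cost, index_at_estimate, index_today):
    """Scale a historical cost by the ratio of today's index to the index
    that applied when the estimate was produced."""
    return historical_cost * (index_today / index_at_estimate)

# A rate of 2,400/m2 estimated when the index stood at 182, rebased to an
# index of 205 today:
print(round(rebase_cost(2_400, 182, 205), 2))  # 2703.3
```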
CostX® Benchmark integrates seamlessly with Exactal’s flagship CostX® estimating software. To learn more about CostX® Benchmark or to determine its suitability for your enterprise, feel free to contact your nearest Exactal office today.