Chris Miles examines the historic issues surrounding the use of desktop studies and how a new approach can reinstate their usefulness.
Those who have followed the Grenfell Tower Inquiry will know that the use of desktop studies was called into question and much maligned by many who considered such assessments part of the problem. Whilst it is beyond the remit of this article to consider the specific assessments which were used to support the products and systems used in the renovation of Grenfell Tower, it is clear that the Inquiry certainly cast some doubt on the use of such assessments. But with the renewed focus on using appropriately firesafe construction materials, the question over the role desktop studies have to play going forward remains a live one.
What’s in a name?
The term ‘Desktop Study’ is often used in the construction sector to cover a variety of processes which relate to the evaluation of a specific system without any direct testing. It is a convenient shorthand, but it does not fully explain either the objective or the outcome of the process. The term is also used interchangeably with ‘Engineering Judgement’ and ‘Technical Evaluation/Assessment’. In the interim report following the Grenfell Tower fire it was recommended that Approved Document B (Fire Safety) should be amended to restrict the use of these studies, then referred to as ‘Assessments In Lieu Of Tests’ (AILOTs).
The process that each of these terms is intended to cover is often used for passive/fire resisting systems, particularly systems such as firestopping and even fire resisting doorsets that can have multiple design variations. It is used in the construction sector where it may not be possible to test a proposed system, often for a specific project, and often for a specific design/specification that has not already been tested. Such assessments have regularly been used where a site-specific variation has occurred late in the project, or where there are small variations expected to have only a minor impact on the basic construction.
However, it is particularly important to be clear that the system being evaluated by the assessment route should be based upon a previously tested system or systems which are close in design to the proposed specification. If that is the case, describing the assessment as being ‘in lieu’ of testing is misleading and likely to result in misconceptions about what the assessment actually is and how much weight it carries.
Whilst accepting that it doesn’t roll off the tongue quite so easily, for construction products with fire safety properties the correct terminology should be ‘Technical Assessments of Fire Performance of Construction Products Based on Fire Test Evidence’. This would reinforce the need for, and highlight the value of, the supporting test evidence. But can the assessment be technically justified, or is it a short cut to save money, one which also compromises fire safety?
UK government position
The government agreed with the recommendation in the interim report mentioned earlier and launched a consultation in 2018. The proposal was that “the government should significantly restrict the use of ‘desktop studies’ to approve changes to cladding and other systems to ensure that they are only used where appropriate and with sufficient, relevant test evidence”. The consultation went further in proposing that “those undertaking desktop studies must be able to demonstrate suitable competence. The industry should ensure that their use of desktop studies is responsible and in line with this aim”.
Following the consultation, the resulting report stated that the government had decided to go further than the interim report recommended and, in effect, ban the use of AILOTs for the external walls of certain high-rise buildings. Given the events leading up to the Grenfell Tower fire this could be seen as a sensible solution. The consultation reported on findings from many parts of the construction sector, with the vast majority of respondents agreeing with the recommendation to restrict the use of AILOTs.
The consultation report went on to conclude that, amongst other requirements:
- Assessments should only be carried out where it is clearly impractical or not feasible to carry out tests.
- Assessments should only be carried out when sufficient and relevant test evidence is available to support the assessment.
- A standard for extended application of test evidence should be followed, or if there is no standard, the principles outlined in BS EN 15725:2010 should be followed.
- The test evidence which forms the basis for the assessment should be referenced.
This acknowledges that there is scope for such assessments to be used, but that they should be conducted within limitations and not just to suit the need for a quicker, cheaper solution.
Defining the scope
In 2019, the Passive Fire Protection Forum (PFPF) produced a guide to undertaking these types of Technical Assessments. The guide was developed by individuals from organisations with a history of producing such assessments based on fire resistance tests, and was an attempt to add some definition to the process of conducting them. It was so well received that it is now referenced in Appendix B: Performance of materials, products, and structures of Approved Document B (ADB). It states that systems or products “should have been assessed by applying relevant test evidence, in lieu of carrying out a specific test, as being capable of meeting that performance classification” and that “Further information on best practice is provided in the Passive Fire Protection Forum’s Guide to Undertaking Technical Assessments of the Fire Performance of Construction Products Based on Fire Test Evidence”. This reference in ADB is further acknowledgement that there is a need to conduct such assessments, and it also points to how the evaluations should be conducted: by using the PFPF Guide.
Despite the reference in ADB, the PFPF Guide is not widely known and has not been universally adopted as the basis for these types of assessments. The Guide defines a number of requirements which should be followed when writing Technical Assessments based on existing fire tests. These include requirements for the competency of the assessor and reviewer; for the organisation producing the assessment to hold the requisite Professional Indemnity insurance and Quality Management systems; for the use of older test evidence; for what should be included in the report, including clear references to the supporting test data; and for rules of conduct and ethics for the users. The requirements in the PFPF Guide address all of the concerns raised during the government consultation and, if followed, would alleviate the concerns regarding the production of AILOTs.
Why not just test everything?
For many systems which reach the market there are multiple variations possible. These variations are needed for numerous reasons, including aesthetics (e.g. different veneers on doors), the requirements of different types of services in a building (e.g. penetrations through a firestop), and varying structural needs (e.g. steel sizes and design loads). The variations in system specification are so extensive that to require each of them to be tested would be time-consuming and prohibitively expensive. Furthermore, the variations needed are often not known until a late stage in the build process. This may be due to last minute changes in design, different requirements within a building, or many other reasons. It is often then too late to test the specific design and/or the cost of testing has not been allowed for. In an ideal scenario there would be no such last-minute changes, but this is not always possible.
A test is simply a statement of facts recorded at a moment in time on a specific system. It does not guarantee that the same data would be recorded if the test were repeated a day later. There are numerous variables in any test, and the most repeatable tests reduce these variables to a minimum. The simpler the test and the system being tested, the more repeatable it will be. A fire test on a fire resisting system, however, is not simple: such tests are often extremely complex, with numerous possible variables, including variables within the system under test. This means there is a degree of uncertainty with any fire resistance test. It follows, therefore, that if there are any changes to the specification of the tested system, there can be no guarantee that the same performance will be achieved. For this reason, test evidence provides information on the specific product or system tested and generally does not offer any extension to scope or design variability.
The role of a Technical Assessment is to extend the scope of the tested design/application of a product or system offered to the market. To do this, the assessment must be based on sufficient, relevant, and directly applicable test evidence. It is not a licence to extrapolate test data to a specification so far removed that it is no longer representative of the tested system. The PFPF Guide details two types of supporting test evidence which may be used, primary and secondary, and separates assessments into Basic, Intermediate, and Complex, defining the levels of knowledge that an assessor and reviewer should have in each case. By addressing these aspects, the process should result in a considered, technically robust assessment that does not stray too far from the tested system.
It might be concluded that fire testing of systems will not be required in the future, as all systems could simply be evaluated by a ‘desk-based assessor’. However, under the principles in the PFPF Guide there will always be a need for test evidence on which to base the evaluation.
It should also be remembered that a positive assessment is not always possible. It could easily be the case that the requested change from the tested system would not result in performance anything like that achieved during the supporting test. This is an important point for those requesting changes, and for those producing the technical assessment, to bear in mind. The assessor must have the right to judge that a request extrapolates the tested system too far, in which case a new physical test would be needed, despite the likely costs and/or time delays.
Can trust be rebuilt?
It is the position of the FPA that, by adopting the procedures defined in the PFPF Guide to Undertaking Technical Assessments of the Fire Performance of Construction Products Based on Fire Test Evidence, numerous checks would be in place to ensure an assessment is robust, justifiable, and as accurate as possible regarding the expected fire performance of a system. To achieve this, it is critical that the requirements in the PFPF Guide are followed as a minimum.
We also believe that these assessments can be given a further layer of robustness through the addition of a peer review by a third party with the relevant competence, particularly for Complex Assessments. This should remove any remaining doubts regarding acceptance of the assessment.
By following the PFPF guide as a minimum, and adding a further peer review stage for complex assessments, the FPA believes that a robust, technically justifiable assessment is possible and that there is a place for such assessments within construction. The process must be part of a quality system maintained by the organisation providing the assessments so that internal and external audit checks can be conducted.
The AILOT report is available here.
Fire & Risk Management is the UK’s market leading fire safety journal, published 10 times a year, and is available exclusively to FPA members in digital and print format depending on your requirements. You can find out more about our membership scheme here.
Chris Miles is the Commercial Director at the FPA.