
Explosive Testing of Window Systems – the Underlying Double-Standard

By Kenneth W. Herrle, P.E., and Larry M. Bryant, Ph.D.

Although explosive tests can be configured to meet both the ASTM and GSA test standards, there are numerous differences between the two methods that must first be addressed. Key differences in test requirements include:

  • Interior high-speed photography requirements: The ASTM Test Method allows the use of high-speed video but does not require it. The GSA Test Method, on the other hand, requires that, at a minimum, interior high-speed video be recorded for each test specimen.
  • Pressure gauge requirements (number and location): The ASTM Test Method requires three reflected pressure gauges mounted on the exterior of each test structure, plus a free-field exterior pressure gauge located near one of the test structures. The GSA Test Method, on the other hand, requires two reflected pressure gauges mounted on the exterior of each test structure and at least one interior pressure gauge inside each test structure (or each partitioned volume).
  • Minimum number of test samples: The ASTM Test Method requires testing of at least three samples for each blast load environment, plus disassembly and measurement of a fourth sample. The GSA Test Method specifies neither a minimum number of test samples (one is typically used) nor any sample disassembly.

While these are just a few of the differences between the two test standards, they provide a starting point for the test sponsor to ensure that the test provider conducts the test in compliance with the correct standard. The test standard selected for the project should be consulted for the full list of requirements that must be met.

Prior to testing, the test sponsor should ask the test provider about the adequacy of the test structures planned for the test. If the test structures do not meet the requirements, the validity of the test results may be questioned. The test sponsor should ensure that all test structures are fully enclosed (on all sides) to prevent infill loads, and that the test structures will stay within the response limits of the selected test standard when exposed to the target blast loads; a simplified form of that check is sketched below. Questions regarding acceptable test structure response limits should be raised if the test structures are constructed of materials such as wood or lightweight metal sheeting, as these materials are not commonly used in blast-resistant construction.
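In practice, response checks of this kind are often made with a single-degree-of-freedom (SDOF) blast analysis of the sort used in blast design guides, with the result compared against the standard's allowable ductility or rotation limits. The following is a minimal sketch of such a check, assuming an elastic-perfectly-plastic SDOF model and a triangular pressure pulse; the function name and every numeric input are illustrative assumptions, not values drawn from either test standard.

    def sdof_blast_response(mass, stiffness, r_u, p0, td, dt=1e-5, t_end=0.1):
        """Elastic-perfectly-plastic SDOF response to a triangular pulse.

        mass      equivalent mass (lb-s^2/in)
        stiffness elastic stiffness (lb/in)
        r_u       ultimate resistance (lb)
        p0        peak applied force (lb)
        td        positive-phase duration (s)
        Returns (peak displacement, ductility ratio mu = x_max / x_yield).
        """
        x_yield = r_u / stiffness
        x = v = x_max = x_plastic = 0.0
        t = 0.0
        while t < t_end:
            f = p0 * max(0.0, 1.0 - t / td)      # triangular blast pulse
            r = stiffness * (x - x_plastic)      # trial elastic resistance
            if r > r_u:                          # yielding, loading direction
                r = r_u
                x_plastic = x - x_yield
            elif r < -r_u:                       # yielding on rebound
                r = -r_u
                x_plastic = x + x_yield
            a = (f - r) / mass                   # Newton's second law
            v += a * dt                          # explicit time stepping
            x += v * dt
            x_max = max(x_max, x)
            t += dt
        return x_max, x_max / x_yield

    # Hypothetical test-structure panel: 10 psi peak reflected pressure
    # over a 40 in x 40 in tributary area, 5 ms positive phase.
    area = 40.0 * 40.0                           # in^2
    x_max, mu = sdof_blast_response(
        mass=0.5, stiffness=20000.0, r_u=8000.0,
        p0=10.0 * area, td=0.005)
    print(f"peak displacement {x_max:.3f} in, ductility {mu:.2f}")

If the computed ductility ratio exceeds the allowable limit in the selected standard, the structure should be stiffened or the target loads reconsidered before the test proceeds.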

If multiple window units are tested side by side in the same test structure, ensure that provisions have been made to distinguish the glass fragments generated by each window unit. This can be accomplished by using a different glass color for adjacent window units, or by physically segmenting the test structure into individual compartments to avoid fragment intermixing. Also be on the lookout for items that may affect the response of the window systems, such as test structure framing components located directly behind window mullions, or items located directly behind the window that may obstruct fragment flight.

An additional item that may compromise the validity of an explosive test, and that should be addressed during pre-test preparations, is failure to meet the minimum blast load requirements specified for the test. Window systems are typically tested against a specific set of criteria consisting of load requirements for both pressure (psi) and impulse (psi-msec). These load requirements are minimum baseline values for meeting the criteria. Post-test reporting of sub-baseline test loads as having met the baseline requirements, based solely on the rationale that the loads fall within some percentage tolerance, is unacceptable. Tests that do not meet or exceed the minimum baseline values for both pressure and impulse have not met the specified test criteria, and the validity of the results in such cases is often challenged. Prior to testing, the test sponsor should verify with the test provider that the target test loads are expected to be met or exceeded, and should request proof that similar loads have been achieved on previous tests.

Standard blast load calculations assume that the explosive event takes place at sea level and that the reflecting surface of the target structure is infinitely large. When calculating charge weight and standoff combinations for explosive tests, adjustments must therefore be made both for the altitude of the test site (if not at sea level) and for clearing effects on smaller test structures. Failure to do so will result in blast loads that are lower than predicted; the sketch below shows the form the altitude correction typically takes.
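One common form of the altitude adjustment is Sachs scaling, in which peak pressure scales with the ratio of site ambient pressure to the sea-level value and impulse also depends on the local sound speed. The following is a minimal sketch under that assumption; the constants, function name, and example site conditions are our own illustrative choices, and a full correction would also adjust the scaled distance used to enter the sea-level charts by (p0/p)^(1/3).

    # Sea-level reference atmosphere (assumed standard-atmosphere values)
    P0_PSI = 14.696    # ambient pressure, psi
    T0_K = 288.15      # ambient temperature, K

    def sachs_altitude_factors(p_site_psi, t_site_k):
        """Multipliers that convert sea-level predictions to site values.

        Per Sachs scaling:
          pressure:  P_site = P_sea * (p/p0)
          impulse:   I_site = I_sea * (p/p0)**(2/3) * sqrt(T0/T)
        (the sqrt term accounts for the change in local sound speed).
        """
        p_ratio = p_site_psi / P0_PSI
        pressure_factor = p_ratio
        impulse_factor = p_ratio ** (2.0 / 3.0) * (T0_K / t_site_k) ** 0.5
        return pressure_factor, impulse_factor

    # Example: a site near 5,000 ft, where the standard atmosphere gives
    # roughly 12.23 psi and 278 K.
    fp, fi = sachs_altitude_factors(12.23, 278.0)
    print(f"pressure factor {fp:.3f}, impulse factor {fi:.3f}")
    # Both factors fall below 1.0, so loads sized at sea level arrive
    # weaker than predicted -- the shortfall the authors warn about.

Clearing effects on small test structures require a separate correction, covered in the standard blast design references, and are not captured in this sketch.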

Both test standards provide a list of items that must, at a minimum, be included in the test report. The test sponsor should verify with the test provider that all test report requirements will be met. Omission of any of the required items violates the test protocol and compromises the integrity of the report.

Test Objectives

Prior to arranging for explosive testing, the final test objective must be decided.  Generally, there are two major categories of test objectives:

  • Proof testing
  • Data collection

Although these objectives have differing ultimate goals, they share many benefits. For example, although the ultimate goal of a proof test is to obtain a “pass/fail” rating for a given specimen and load combination, the data collected from the test are also useful for other purposes, such as development and validation of analytical methods and software, and improvement of the tested product. Such additional benefits may include:

  • High-speed video footage for analyzing the time-dependent response of window systems
  • Pre-test and post-test photographs and measurements for analysis/comparison of response between tested window systems, including failure modes and details

High-speed video of the specimens during a test, for example, has been very useful in determining the actual sequence and modes of failure in complex systems. Such documentation has also allowed development and validation of analytical models; the bite response model currently implemented in the national-standard WINGARD series of window blast analysis programs, for instance, evolved from test observations.

Similarly, testing conducted for the primary purpose of collecting data may also provide a “pass/fail” rating as a byproduct of the test. 
