Several portable MIR units developed at LLNL underwent extensive evaluation during the workshop. They proved capable of detecting human motion, whether ambulatory, respiratory, or cardiogenic, through at least eight feet of concrete. Almost as important, the devices also showed the ability to locate void spaces within concrete structures (i.e., areas where humans might have survived).
The ability to reach and extricate viable victims in collapsed or severely compromised structures is, of course, predicated on locating victims in a timely way. It is the desire of every responder within the rescue community to do the most good for the greatest number of victims of a disaster. Performing effective search operations as rapidly as possible is crucial if responders are to meet their mission requirements. The survivability of entrapped victims is greatest, at approximately 80%, when rescued within the first 24 hours of the collapse. Survivability falls rapidly with time, down to 36.7% if rescued on the second day and 19% if rescued on the fourth day of entrapment.
A critical element of speed is accuracy in locating victims. In a collapsed concrete structure, once a location has been selected based on some probability that a viable victim may be present, response operations may take a rescue team 8 to 12 hours or more. Dedicating a rescue team to a half-day's work, only to find a dead victim, a family pet, or nothing, reduces the chances of survival for many other victims of the disaster.
Today’s search and rescue specialist has a relatively ineffective assortment of tools for locating victims, typically referred to as “electronic search” tools. These devices include acoustic/seismic listening systems and electronic viewing equipment. The usual application of an acoustic/seismic device involves deploying an array of two or more acoustic sensors around the perimeter of a building or rubble pile. For these to be effective, the ambient noise must be very low and the entrapped victim must be generating a distinguishable noise, such as tapping on a solid object that is physically linked to the structure in a way that transmits the sound to the sensor. Unfortunately, for the ambient noise to be low enough for the victim to be “heard,” most operations have to be suspended. That is simply not practical in most large-scale collapsed-structure rescue operations.
There are several electronic viewing devices available to the emergency responder. Flexible fiber-optic scopes comprise one set of devices; another is the search cam. Probably the best device in this category is the infrared/thermal imaging camera, which provides an effective way of seeing through smoke and dust. An infrared imaging system was used with moderate success in the very smoky environment of the World Trade Center incident. The major disadvantage of electronic viewing devices is that a rescuer must be very near a victim to use them. Therefore, field use is generally limited to determining whether a victim is on the other side of a wall or floor.
Recent advancements in miniaturized electronic circuitry and ultra-wideband micro-power impulse radar allowed the LLNL MIR devices to be compact and have modest power requirements. The radar signals showed extremely good penetration through nonconductive materials like concrete, although the radar will not penetrate conductive materials like solid metal or water. It can also be used to locate void spaces within a collapsed structure.
At the workshop, the LLNL MIR team tested several versions of the technology. Prototypes of MIR systems in three families and four frequency ranges were tested: (1) fixed-range gated motion sensors, (2) swept-range rangefinders, and (3) dual-rangefinder "urban-eyes" tracking systems. The four frequency ranges spanned from low (900 MHz) to high (6 GHz), where lower frequencies provide greater penetration of walls or rubble and higher frequencies provide better resolution and detection.
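To illustrate the principle behind the first family, the fixed-range gated sensors, the following minimal sketch (ours, not LLNL's; the function names, data layout, and threshold are all illustrative assumptions) shows how motion can be inferred by sampling each pulse's echo at one fixed delay bin, i.e., one fixed range, and watching for variation over time:

    # Minimal sketch of fixed-range gated motion sensing (illustrative only).
    import statistics

    def gated_samples(echoes, gate_index):
        """Pick the return at one fixed delay bin (range) from each pulse echo."""
        return [echo[gate_index] for echo in echoes]

    def motion_at_gate(echoes, gate_index, threshold=0.05):
        # A static obstruction yields a near-constant return at the gate;
        # a breathing chest or a moving limb modulates it pulse to pulse.
        return statistics.pstdev(gated_samples(echoes, gate_index)) > threshold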
One of the reasons why technologists and responders need to work directly together was illustrated by a responder suggestion during the MIR tests. While the LLNL technologists had previously concentrated on detecting victim life signs alone, it was suggested that finding void spaces within collapsed structures was almost as useful a goal. Without void spaces there is minimal chance of victim viability, and even if life signs cannot be detected within a void space, there is still some chance that a viable victim is within that space.
This technology was extremely well received by all of the responders. It was the first device most had seen that offered a potential order-of-magnitude improvement in a fundamental US&R task, that of victim location, and it probably generated the most excitement of any device tested during the workshop. On our qualitative scale of 1 to 10, the responders rated the MIR's potential a 10.
The biggest problem to solve before general field deployment is ease of use by non-specialists. While the device is light and portable, it is very sensitive to positioning, and the data display is difficult to interpret. It needs a user interface that “translates” radar “squiggles” into a simple yes/no/maybe indication of motion consistent with a human being, as well as a void-detecting algorithm.
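One hedged sketch of the kind of "translation" layer meant here: classify the motion time series by the spectral energy in bands typical of human respiration and heartbeat. The band limits and thresholds below are our assumptions based on typical physiology, not LLNL parameters.

    # Sketch of a yes/no/maybe classifier for human-compatible motion.
    import numpy as np

    def band_energy(signal, fs, lo, hi):
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

    def classify(signal, fs, noise_floor=1.0):
        resp = band_energy(signal, fs, 0.1, 0.5)    # ~respiration band
        heart = band_energy(signal, fs, 0.8, 2.5)   # ~heartbeat band
        if resp > 10 * noise_floor or heart > 10 * noise_floor:
            return "yes"     # motion strongly consistent with a human
        if resp > 2 * noise_floor or heart > 2 * noise_floor:
            return "maybe"
        return "no"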
Our best guess is that 1.5 to 2 years of work is necessary to develop a unit suitable for general field use. Such an initial deployable unit will probably be tuned for minimal false negatives (and will therefore probably produce considerable false positives), with further tuning to come from extensive field use.
Two different rapidly deployable wireless network systems (from JPL and a small company, Tri-Sentinel) were evaluated during the workshop. Wireless micro-networks are capable of providing 3-D location and tracking of responders within a building or structure, as well as providing a mechanism for collecting and transmitting other sensor data. It would be very beneficial if all monitoring, personnel-tracking, and communication needs could be incorporated into small sensors. With multiple sensors in a wireless system deployed as individual units, incident management would be able to monitor the locations of the responders as they progressed through the structure, monitor the air, and even utilize the units as radio repeaters for enhanced communications (see the sketch below). If sensors are small and light enough (as were the test units from Tri-Sentinel), they can be attached to personal protective equipment or clothing, or incorporated into the tools and equipment that responders already have to take with them.
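The repeater idea can be sketched simply: each worn or dropped node forwards readings hop by hop toward a portal at the perimeter. This is a generic mesh-relay illustration under assumed names, not either vendor's protocol.

    # Hedged sketch of hop-by-hop relay through responder-worn nodes.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        origin: str          # responder/device that generated the reading
        kind: str            # "position", "air", "vitals", ...
        payload: dict
        ttl: int = 8         # hop budget before the packet is dropped

    def next_hop(reading, neighbors, hops_to_portal):
        """Forward to the reachable neighbor closest to the portal."""
        if reading.ttl <= 0:
            return None      # give up; network partitioned or looping
        reading.ttl -= 1
        return min(neighbors, key=lambda n: hops_to_portal[n])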
Both systems functioned well and were well received by the responders. In the past, a small element of the emergency response community has expressed concern that tracking devices and sensors should not be deployed as “bread crumbs.” This did not appear to concern the responders who participated in the workshop. The widely held opinion (with no dissent from the approximately 60 responders present) was that having a reliable RF network within a disaster scene was of such importance that deployment (especially of lightweight, self-deploying devices) was a minor inconvenience compared to the value of the network. The opinion of the technology experts in sensors and RF communications present was that current technology could not provide a reliable network without a significant number of such deployed devices.
The Tri-Sentinel system consists of an Incident Information laptop and a small device worn by each responder to instantly form a mobile wireless network at any incident area. Each device contains components that enable the 3-D location and tracking of responders and allow for incident-area communications among responders. In addition, the pre-programmed devices automatically provide incident management with information on the identification and capability of each responder (credentialing). The devices can also be outfitted with various sensors to monitor the environment in which the responders are working, or the vital signs of the responders, and provide the information to the command post in real time. During the workshop the prototype devices were able to remotely track the location of the responders around the incident area and within a concrete structure. The product also provides incident command and control support: an intelligence-gathering function processes sensor data into useful information for the Incident Commander, in the form of alerts and warnings as well as an overview of incident-area conditions.
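As a rough illustration of the credentialing record and alert-generation functions described above (field names and thresholds are our assumptions, not Tri-Sentinel's implementation):

    # Illustrative sketch of credentialing plus sensor-driven IC alerts.
    from dataclasses import dataclass

    @dataclass
    class Credential:
        responder_id: str
        name: str
        capabilities: list   # e.g., ["EMT", "rescue specialist"]

    def evaluate(responder_id, sensors):
        alerts = []
        if sensors.get("co_ppm", 0) > 35:        # assumed CO ceiling
            alerts.append((responder_id, "WARNING: elevated CO"))
        if sensors.get("heart_rate", 0) > 180:   # assumed vitals limit
            alerts.append((responder_id, "ALERT: heart rate high"))
        return alerts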
The JPL Sensor Web wireless micro-network uses a deployable multi-sensor infrastructure. Responders deploy sensor pods as they advance within a building or structure. A mother/portal pod is stationed outside the structure and is linked to a command post computer via a 2.4 GHz radio. A responder within the structure can monitor the sensors on the system with a PDA. The purpose of the Sensor Web pods is to provide an infrastructure for responder tracking: a Personal Identification Device (PID), about the size of a small coin, is attached to each responder and communicates with the pods. The Sensor Web system is capable of integrating any compatible sensor device into its communications network, and sensors can be used to monitor the environment in which the responders are working. Similar Sensor Web systems from this group have been successfully deployed and left unattended in various harsh environments for long periods (months or even years), so the technology is already field-tested and robust. During the workshop the Sensor Web pods were able to remotely locate responders relative to their proximity to a pod, and to monitor the atmosphere within the collapsed-structure environment.
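Pod-proximity localization of the kind demonstrated can be sketched in a few lines: a PID's coarse position is taken to be that of the pod hearing it most strongly. The names and the use of RSSI here are our assumptions, not JPL's algorithm.

    # Minimal sketch of "locate the responder near the strongest pod".
    def locate(pid_rssi_by_pod, pod_positions):
        """pid_rssi_by_pod: {pod_id: rssi_dbm}; stronger = less negative."""
        nearest = max(pid_rssi_by_pod, key=pid_rssi_by_pod.get)
        return pod_positions[nearest]   # coarse fix: "near this pod"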
Wireless micro-networks were very well received by all of the responders at the workshop. The problem they solve is nearly as critical in emergency response as the victim-location problem discussed above, also with a consensus potential rating of 10. Both the JPL and Tri-Sentinel systems were viewed as very close to field-deployable; the remaining work mainly involves RF issues, additional ruggedization, testing in realistic emergency response environments, and making sure that a full suite of interoperable sensors is available to “plug and play” in each network. Our best guess is that less than one year of development is needed before wide field deployment.
UAVs potentially provide emergency responders the ability to view, survey, and analyze a disaster scene from the air. UAVs can be used in wildland and urban fire assessment, tornado damage assessment, earthquake damage assessment, tracking of hazardous material plumes, and victim location. As pointed out by the responder group from North Carolina, UAV surveillance would be very useful for locating victims in houses or on rooftops in flooded areas. Beyond simple visual analysis, UAVs can serve as a highly mobile platform for any form of sensor.
Three different UAV technology groups participated in the workshop. The Autonomous Rotorcraft Project (ARP) at ARC is developing an intelligent, autonomous interface for small unmanned helicopters. (The most typical platform is a Yamaha helicopter about the size of a Jet-Ski that is in common use for crop dusting in Japan.) The device is capable of carrying a substantial payload and is extremely maneuverable, but is very difficult to fly, requiring both helicopter pilot and remote operator skills.
During the workshop, the ARP demonstrated an autonomous Intelligence, Surveillance, and Reconnaissance (ISR) mission capability. To initiate the mission, the ARP operator selected targets-of-interest (TOIs) by clicking a mouse on a moving map. The autonomous software then calculated a target sequence that provided optimum coverage, viewing locations for each target (after accounting for sun angle), and navigation waypoints for traveling to those viewing locations while avoiding known obstacles. Once a route had been planned (typically within 100 msec), the vehicle proceeded to fly the mission autonomously while returning digital video to the operator, adapting to targets and obstacles that were added and deleted on the fly.
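One simple way to sequence targets, shown below as a hedged sketch, is a greedy nearest-neighbor tour; the actual ARP planner also accounted for sun angle and obstacle avoidance, which are omitted here for brevity.

    # Sketch of greedy TOI sequencing (illustrative, not the ARP planner).
    import math

    def plan_sequence(start, targets):
        """Order TOIs by repeatedly flying to the nearest unvisited one."""
        route, here = [], start
        remaining = list(targets)
        while remaining:
            nxt = min(remaining, key=lambda t: math.dist(here, t))
            route.append(nxt)
            remaining.remove(nxt)
            here = nxt
        return route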
The ARC Earth Science Division, in conjunction with the Clark University UAV Applications Center, flew an unmanned fixed-wing aerial vehicle with remote sensing capability during the workshop. The small UAV carried a high-resolution digital camera system that allowed real-time surveillance of the disaster site and the San Francisco Bay coastline, providing telemetered imagery to the command post for analysis by the first responder staff. During one of the workshop scenarios, the command staff used the UAV video to identify intruders in a raft attempting to compromise perimeter security. In addition to the digital camera, a thermal infrared video capability was demonstrated to detect hot spots in and around the collapsed structure, and an atmospheric air-sampling sensor was mounted on the UAV to provide data for determining the constituents of a simulated hazardous chemical plume.
Finally, a very small (two-foot rotor diameter) and lightweight unmanned helicopter, called the Flying-Cam, was demonstrated. This vehicle is in active commercial use as a movie camera platform (it has appeared in a large number of recent films, including James Bond and Harry Potter movies). The vehicle can maneuver into very small spaces, but currently requires a two-person team for control.
The rotorcraft UAVs have considerable potential for specific-site surveillance and monitoring. Some responders thought the ideal vehicle would be a cross between the Flying-Cam, because of its size and ease of deployment, and the intelligent control system built for the larger Yamaha helicopters. Such a smaller craft would indeed be useful for indoor operations, but we also believe that the larger helicopters, which can handle higher winds and light rain, would be a good asset for outdoor use at their current size and payload capacity.
The major advantage of fixed-wing UAVs is their ability to perform very long-term scene surveys at relatively high altitudes (many days, in some cases, with solar-powered vehicles) and to carry fairly heavy sensor suites.
The major concern about UAVs for emergency responder use is their perceived cost and difficulty to operate. We believe the former can be reduced through use of existing commercial platforms (like the Yamaha helicopters) and the latter ameliorated by technology like that demonstrated by the ARC collaboration between UAV and intelligent systems specialists.
Of particular importance to the utility of UAVs is the development and integration of appropriate sensors with the vehicles. As such, the path to routinely useful technology is more complex than with many of the other technologies evaluated at the workshop. However, we were pleasantly surprised by the high level of responder enthusiasm for the potential payback of UAVs, at least in the relatively long term. We therefore believe the long-term importance of UAVs rates a 9 on our scale (with long-term defined as three to four years of development); we rate the one-to-two-year value as a 4.
A portable human health monitoring system, developed by ARC and the Stanford Center for Space Biological Technologies for astronaut use, was tested during the workshop. It appears to have considerable potential both for the in situ determination of victim health and for the monitoring of responders.
Monitoring the vital signs of responders working in a hazardous environment requires small, lightweight, and robust sensors and electronics. A body-worn system should be unobtrusive, noninvasive, and easy to use. It must be able to log vital-signs data for several hours as well as transmit it on demand in real time using secure wireless technologies. Ideally, such a device will also relay responder location and act as part of an intelligent micro-network (as described above).
The ARC/Stanford LifeGuard system was subjected to rigorous proof-of-concept testing during the workshop. The current system is the result of hardening the core technology of the Smart Health Care Management System, a modular monitoring system that combined user-friendly lightweight medical sensors, Pocket PCs, a secure wireless network architecture, and intelligent software for data display, storage, and analysis.
The LifeGuard system comprises physiological sensors (an ECG/respiration electrode patch, pulse oximeter, and blood pressure monitor), a wearable device (the CPOD), and a base station (a Tablet PC). LifeGuard measures two channels of ECG, respiration rate, heart rate, activity, skin temperature, SpO2, and blood pressure. All sensor data is acquired by the CPOD, which performs signal conditioning and data acquisition, data logging, data transmission to a base station (wired or wireless), and status display. During the workshop, the wearable device acquired and logged the physiological parameters of several responders working within the simulated collapsed structure, and the data was streamed to the base station for display and further processing.
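As a hedged illustration of the record such a wearable might log and stream (field names are our assumptions based on the channels listed above, not the CPOD firmware):

    # Sketch of a wearable vitals sample and its log-and-stream path.
    from dataclasses import dataclass

    @dataclass
    class VitalsSample:
        t: float                 # seconds since start of shift
        ecg: tuple               # two ECG channels
        respiration_rate: float  # breaths/min
        heart_rate: float        # beats/min
        activity: float          # accelerometer-derived index
        skin_temp_c: float
        spo2_pct: float
        blood_pressure: tuple    # (systolic, diastolic) mmHg

    def stream(sample, log, radio):
        log.append(sample)       # on-device logging for several hours
        radio.send(sample)       # assumed interface: real-time link to base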
Overall reaction to the device was strongly positive, particularly regarding its potential to serve all three functions: responder health monitoring, responder position location, and victim health determination. Several health professionals present during the test found it a potentially key device for monitoring responder vital signs. Overall, we give it an 8 on our scale of importance and utility. The major remaining work is to make the device perhaps 50% smaller and to fully ensure that it is rugged and reliable enough to survive the harsh conditions of a responder crawling through concrete rubble. We estimate that this work can be done and the unit made ready for full field deployment within one year.
While robots have previously been tested in many emergency response situations (and there is even a longstanding “Robo-Rescue” international competition), they have been of little practical use to date in all but very specialized situations (such as explosive-device inspection in relatively easy-to-reach settings). That is mainly for three reasons: difficulty traversing hazardous terrain (like concrete rubble), difficulty maintaining communications and power links (tethers do not survive very long at disaster scenes), and far-too-complex human interfaces. All of these issues were evaluated and discussed at the workshop.
CRASAR, ARC, and CMU all provided robots, sensor payloads, and personnel for evaluation at the workshop.
The tests of all the robots were something of a mixed bag. Due to their low ground clearance and high centers of gravity, even the smallest concrete debris stopped the robots’ progress throughout the workshop, and in some cases the debris rendered them unable to return to the operator. Since all were powered and communicated through tethers, the frequent breakage of those tethers rendered them inoperable.
In spite of the robots’ drawbacks, the participating responders believe that robots can become an excellent tool for response and recovery operations. Robots may have some application as a mechanism for setting up and maintaining communication links. The ability of robots to deliver victim-monitoring sensors and nourishment/water to victims would greatly benefit rescue operations. A definite benefit would be a sensor that could determine whether the victim had crush syndrome, thereby helping set the priority for human rescuers to reach the victim.
The consensus view at the end of the workshop was that current rescue robotic devices were at best a 3 in overall utility during emergency response (except in the aforementioned specialized cases). Potential utility is very high (probably a 10 in our scale), but much engineering and human/machine interaction work needs to be done in terms of all three issues discussed above. We believe that it will be at least four to five years before enough of those problems are solved to make robots a routine emergency response technology. However, we do note that the work to be done is already a major part of efforts at NASA, DOD, and DOE (where hazardous environments and human/machine interaction are also the key issues). We strongly recommend that DHS take advantage of those efforts by partnering and co-funding so they may be applied to emergency response problems in parallel with those agencies’ problems.
Virtual environments are certainly familiar to a generation raised on video games and movies like “The Matrix.” They have the potential to be excellent responder training tools: for example, helping responders learn the complex actions needed to breach walls or floors, how and where to build shores, and how to recognize signs of environmental hazards in collapsed structures. In addition, they are a possible method for real-time planning of how to navigate the interior of target hazards, particularly if integrated with sensors relaying real-time data on the location of responders and victims.
Three virtual environment systems were evaluated at the workshop. One was aimed specifically at the problem of shore construction (although its potential applicability is to improve interactions between personnel working within a collapsed structure and those performing support roles on the outside). The second was targeted at training in general. The third was a more general technology for using real-time data to continuously update a virtual environment.
In a collapsed-structure environment it is not unusual for responders to build shore systems to stabilize loads and to support floors and walls. A two-team approach is often taken to facilitate this process: one team works within the structure to construct the shore, and the other works at a "cut station" outside the structure to cut the lumber to the required lengths. Any miscommunication between the two teams will result in shore components being the wrong size or sent to the wrong team. The ARC Smart Systems Research Laboratory (SSRL) Shore Tool is designed to link the responders constructing the shores with the cutters sizing the lumber. It provides a way for responders inside a structure to wirelessly upload shore requests, including dimensions and digital images, to a cutter on the outside. Once received by the server, the shore request is time-stamped, archived, and available for immediate review by the cutting team (see the sketch below). Command center staff can use the shore reports and images to monitor progress and also to consult with structural engineers (if available) to assess whether a building is in imminent danger of collapse. When a shore report is uploaded, the responder's location is uploaded as well (this functionality was simulated at the workshop and is not yet fully operational in the Shore Tool). The location information is then linked to a 3-D virtual environment of the scene so that squad leaders and command center staff can see where in the structure a particular shoring request came from.
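A hedged sketch of the kind of shore request involved follows; the field names and queueing step are our assumptions for illustration, not the SSRL schema.

    # Sketch of a shore request as the Shore Tool might upload it.
    import json, time

    def make_shore_request(responder_id, location, dimensions_in, image_path):
        return {
            "responder": responder_id,
            "location": location,              # (x, y, z) within structure
            "dimensions_in": dimensions_in,    # e.g., {"post": 46.5, ...}
            "image": image_path,               # digital photo of the gap
            "timestamp": time.time(),          # server also time-stamps
        }

    def queue_for_cutting(request, cut_queue):
        # The cut station reviews requests in arrival order.
        cut_queue.append(json.dumps(request))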
The responders who participated in the workshop really liked the idea of a shoring simulator with which they could practice building virtual shores and then, using the physics simulation, find out how strong their shores are. Shores can be designed by the responder to fit any configuration and virtually “tested” before spending the time and labor necessary to build the physical shore.
The ARC BioVIS Technology Center field-demonstrated the other two virtual environment systems described above. First Responder Simulation Training on a Virtual Interactive Environment Workstation (FIRST-VIEW) supports simulations that provide interactive training for specific events, locations, scenarios, or procedures for first responders. The system clearly has many applications beyond collapsed-structure rescue; for example, it could be used by firefighters as an on-line trainer for learning how to operate the various engine pumps on fire trucks, or to train operators of an emergency operations center.
The Coordinated 3-D Incident Site Mapping program is a technology that takes input from any video source recorded in or around an incident site. Videos can come from responders, eyewitnesses, fixed cameras, robots, or UAVs. The program rapidly processes the video to extract accurate 3-D information and produce a continually updated site map, providing a basis for complete incident-site situational awareness.
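A rough sketch of one standard way to pull 3-D structure from video is two-view feature matching plus pose recovery, shown below with OpenCV. This is a generic structure-from-motion building block, not the Coordinated 3-D Mapping code; K is an assumed camera intrinsic matrix.

    # Sketch: recover relative camera pose between two video frames.
    import cv2
    import numpy as np

    def relative_pose(frame1, frame2, K):
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(frame1, None)
        k2, d2 = orb.detectAndCompute(frame2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t   # rotation and translation between the two views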
In general, responders were impressed with the training potential of virtual environments, for the shoring problem described above, and for many other emergency response areas as well. The consensus view was an 8 in importance. We believe a strong development effort would lead to fully field-deployable technology within 12 to 18 months.
There was less consensus as to the readiness or value of real-time systems for modeling and helping make decisions about operational situations. This was because of the complexity of data gathering, communications, and modeling during the chaos of a disaster scene. The potential is high, probably an 8 also, but we estimate at least four to five years of development is needed before value is readily seen.
The breaching of concrete or steel walls during rescue operations is an extremely laborious, time-consuming, and frustrating task for responders. It normally takes two to four hours to perform a single breach, and in the process it can fatigue entire squads of rescuers. The tools presently used (usually jackhammers, drills, and concrete saws) are large, cumbersome, and noisy, and in some cases very dangerous to operate in confined spaces. US&R responders have an urgent desire to move to tools that make this job quicker, easier, and safer.
Lawrence Livermore National Laboratory (LLNL) is one of the nation’s leading sources of expertise on laser technology. Prior to this workshop, LLNL personnel met with the authors of this report and described their work in developing potential rock-cutting tools for the Yucca Mountain nuclear waste repository. Both parties saw the potential for helping with concrete wall breaching. ARC provided samples of concrete with embedded re-bar to the LLNL team, who tested portable semiconductor lasers on the samples. LLNL staff reported on their tests at the workshop, showing a video of the initial performance.
The laser had two effects. First, it caused rapid “spalling” (cracking of the concrete surface) on the six-inch-thick wall samples provided. Second, the laser was capable of very rapid (10-20 second) cutting of embedded re-bar. Further tests are needed to determine whether the spalling effect makes a qualitative difference in reducing the total time for breaching.
The laser breaching system has excellent potential if it can be made smaller, lighter, and easier to use, and if it can be made to work faster than the power/hydraulic tools currently available. We are also investigating a JPL-developed ultrasonic drilling technology (originally developed for rapid, low-power drilling on planetary surfaces). The JPL technology may be complementary to the LLNL laser in developing an entirely new system for concrete breaching.
All of the responders felt that the value of the tool would be a 10 if the complex operational issues could be solved, because breaching time and victim-location time are the two most critical factors in rescuing live victims. Moreover, they felt that a broad range of other approaches should be applied to this critical problem (where the technology has not improved much since World War II). We believe that it will take at least four to five years to reach field deployment of such a tool, although field testing could start much sooner (within one to two years).
The atmosphere that responders have to work in is often both hazardous and dynamic. A system that can constantly monitor and evaluate the constituents of the air and provide the information in real time to the command post or Emergency Operations Center would be extremely valuable in response and recovery operations.
PureSense is a small company located in the ARC-developed NASA Research Park. The PureSense air monitoring system provides a fully integrated structure that can effectively monitor, analyze, and alert air-quality monitors and emergency responders to potential contamination in real time, rather than over the longer periods typical of traditional laboratory analysis. The system consists of three main components.
The system was field-tested during the workshop. Responders in general thought it was well designed, but mainly a tool for the incident commander rather than individual responders. The commanders present rated it a 7 or 8 in terms of importance for major disaster scenes. The system is deployable today, but a version specialized to the rapid set-up and customization needs of an emergency scene would probably take a year to develop.
An interoperable, fully integrated Mobile Command Center, developed by ARC DART, was used during the entire set of workshop exercises and tests. This Command Post (CP) was planned to maximize the ability to “showcase” the data being generated by the respective technologies. The CP was not established to be part of the formal technology demonstration; it was intended to provide a proof of concept of what a high-tech command post of the near future might look like.
Two workstations were set up to record the work of the Information/Plans Section and Operations Section in the Incident Command System (ICS) model for a CP. No attempt was made to establish workstations or capture data for the Administration Section or Logistics Section that would normally be designated in a CP or Emergency Operations Center (EOC).
To effectively capture data in the CP log, a web server was set up to host the Ames version of the WebEOC system. The Operations Section staff used the simple WebEOC position-log capability to record data, and the Information/Plans Section players used the WebEOC ICS forms to assist the Incident Commander (IC) in tracking resources and personnel.
To support proper display of the technology system data, the CP had four high-resolution projectors and two large screen LCD monitors. Each monitor was capable of displaying four separate images.
An attempt was made to integrate all of the CP devices on an IEEE 802.11g wireless network, with the intent that as many of the technology systems as possible would generate wireless data for display by the CP devices. The bandwidth of 802.11g was stretched just integrating the CP equipment into a wireless LAN, and the network crashed as soon as data started arriving from a technology system. While this was disappointing, it did not come as a total surprise.
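A back-of-envelope calculation suggests why saturation was likely; the throughput and per-stream figures below are our illustrative assumptions, not measurements from the exercise.

    # Rough arithmetic: nominal 54 Mb/s 802.11g typically yields ~20-25 Mb/s.
    effective_throughput_mbps = 22     # assumed realistic 802.11g yield
    video_feed_mbps = 5                # assumed per-camera video stream
    cp_overhead_mbps = 6               # assumed CP displays and telemetry
    feeds = (effective_throughput_mbps - cp_overhead_mbps) // video_feed_mbps
    print(f"At these rates the LAN supports only about {feeds} video feeds")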
Multiple video cameras were used to monitor the activities of the responders, technologists, and devices such as robots, and all of the video data was recorded. The initial intent was to transmit the video data via the wireless LAN, but that concept was dropped as the exercise began.
A GPS geomapping device was used to create a very simple map of the N267 training site with relatively accurate (< 0.5 meter) data and to integrate this view into the Center GIS. None of the data from the technology systems was formatted to allow integration into the CP GIS display system. To display system data properly, a standard (interoperable) data format needs to be defined and used by all technology systems.
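One plausible candidate for such a standard is a GeoJSON-style feature that any technology system could emit for direct GIS ingestion; the property names below are our assumptions for illustration, not an adopted standard.

    # Sketch of a GeoJSON-style position report (coordinates are lon, lat).
    position_report = {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [-122.057, 37.414]},
        "properties": {
            "source": "sensorweb-pod-12",   # assumed device identifier
            "kind": "responder_position",
            "timestamp": "2004-08-17T14:32:05Z",
            "accuracy_m": 0.5,
        },
    }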
The CP system was highly rated by command staff because it provided accurate, real-time data on all the activities taking place on scene. The individual responders, as had been predicted, disliked it because they felt they were being micromanaged by the IC in the Mobile Command Center.
It is clear that better tools for Incident Commanders are needed to decrease the time necessary to begin productive operations and to integrate disparate teams at a major disaster scene. The IC’s opinion was that the technology demonstrated was a 9 in potential utility. Clearly, however, the human-factors issues of properly managing (without micromanaging) emergency response specialists must be taken into account in any fully operational system. We estimate that within two years this work could mature into the right balance of central management and individual responsibility.