Government-funded research may help airport screeners pick out dangerous objects from the clutter of items in carry-on luggage.
We’ve all stared at a bookshelf, or into an open refrigerator, unable to spot an item we want, only to find that it was right in front of us all along. Airport security screeners face the same quirk of human perception, but for them it’s a deadly game of spotting threats amid the clutter on checkpoint video monitors.
The U.S. government has been funding research by visual perception experts in an effort to find out why we see some things and not others. Officials hope to apply findings in the field and in continuing training programs so that airport screeners can better detect suspicious objects.
Preliminary study results show that screeners are better able to spot specific items they see frequently and that screeners are most effective when they search for a single type of threat, such as guns or explosives, as opposed to multiple threats at once.
Jason McCarley of the University of Illinois tested subjects’ ability to spot threat objects—in this case knives—in typical suitcase x-ray images. Inexperienced test subjects were shown 240 typical, cluttered baggage x-ray images, with a knife image present in 60 of them. The subjects were not given a time limit but were instructed to make each determination—and state it—as soon as they could.
Throughout, McCarley also monitored subjects’ sight lines using an eye-tracking device to determine where subjects looked when they did and did not see a target.
“We know whether they looked at it, and then we know whether they saw it,” McCarley says. “Sometimes they don’t look in the right location, and other times they will look right at it and still not ‘see’ it. It’s too well camouflaged or too well hidden” among other items in the bag.
Success rates grew from 77 percent to 89 percent over four practice sessions. In a fifth session, evaluators used a different knife image to test viewers’ capacity for perceptive “generalization.” Success ticked down only slightly, to 85 percent.
A study conducted at Boston’s Brigham and Women’s Hospital using similar methods found that the rarer a threat is, the more often searchers miss it, though the relationship is not one-to-one. When a threat was present in half of the images shown (twice the prevalence of McCarley’s tests), subjects failed only 7 percent of the time. When the threat object appeared in just 10 percent of the images, failure rose to 16 percent.
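The prevalence effect can be made concrete with a back-of-the-envelope calculation using the rates reported above. The helper function below is purely illustrative (the session size of 240 images is borrowed from McCarley's setup, not from the hospital study):

```python
def expected_misses(num_images: int, prevalence: float, miss_rate: float) -> float:
    """Expected number of missed threats in one screening session."""
    # Threats present = num_images * prevalence; each is missed with miss_rate.
    return num_images * prevalence * miss_rate

# Brigham and Women's figures, applied to a hypothetical 240-image session:
high_prevalence = expected_misses(240, 0.50, 0.07)  # threat in half the images
low_prevalence = expected_misses(240, 0.10, 0.16)   # threat in 10% of images

print(high_prevalence)  # 8.4 missed threats
print(low_prevalence)   # 3.84 missed threats
```

Note the asymmetry: at low prevalence fewer threats are missed in absolute terms simply because fewer appear, but the chance of missing any individual threat more than doubles, which is the effect that matters at a real checkpoint.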
For airport screeners, the challenge is compounded by the rarity of threat objects in the real world. The U.S. Transportation Security Administration (TSA) has long tried to address the problem with a system called Threat Image Projection (TIP), which both trains and tests screeners while they are on the job.
TIP software superimposes simulated threat objects on actual passenger baggage images as they pass before screeners. When a screener spots a threat object, he or she strikes an alert button. If the threat object was simulated, the system records the hit (or miss) without event. If the threat spotted was real, the system alerts screeners.
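The decision logic described above can be sketched in a few lines. This is a hypothetical illustration of a TIP-style feedback loop, not the TSA's actual software; all names and fields here are invented:

```python
from dataclasses import dataclass


@dataclass
class BagImage:
    has_real_threat: bool       # a genuine threat object is in the bag
    has_simulated_threat: bool  # TIP superimposed a fake threat on the image


def handle_alert(image: BagImage, screener_pressed_alert: bool) -> str:
    """Decide the system's response when a screener does (or doesn't) alert."""
    if image.has_simulated_threat:
        # Simulated threat: quietly log a hit or miss for the screener's record.
        return "record_hit" if screener_pressed_alert else "record_miss"
    if image.has_real_threat and screener_pressed_alert:
        # Real threat spotted: escalate so the bag is pulled and searched.
        return "escalate"
    return "no_action"


print(handle_alert(BagImage(False, True), True))   # record_hit
print(handle_alert(BagImage(True, False), True))   # escalate
```

The key design point is that simulated threats generate feedback without disrupting the checkpoint, while real alerts trigger the normal response.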
The TSA also runs tests with actual threat objects. In both of these test cases, screeners receive immediate feedback on their performance.
TSA screeners are also required to participate in at least three hours of continuing training each week to keep them sharp in the absence of real events, according to agency spokeswoman Amy Kudwa.
Cathleen Berrick of the U.S. Government Accountability Office, who authored a 2005 report critical of the lack of airport screener training, says the TSA has made great strides since that report was issued. “That said, it’s still a problem. They still don’t have all the training they need,” Berrick says.
Another series of tests was conducted by Kyle Cave of the University of Massachusetts and Tamaryn Menneer of the University of Southampton, England. Funded by the U.S. Transportation Security Laboratory, the research focused on dual-threat screening with images featuring metal threats (guns and knives) and “organic” threats, such as explosives.
In this experiment, threat items were present in only 20 percent of the test frames, but items in the frames did not overlap as they did in the other experiments (and as they would in a typical baggage x-ray).
Single-target success rates mirrored McCarley’s results: 88.9 percent for guns and knives and 87 percent for explosives, Menneer says. When screeners in the experiments were charged with looking for both threat types at once, however, their success rate dropped to 83.4 percent.
“So whenever it’s feasible, you should have different people looking for different items,” Cave says, suggesting use of different screeners for metal and organic threats.