From The Ergoweb® Learning Center

Counterterrorism Expert Gives Two Aviation Security Projects a “Thumbs Up”

In an interview with The Ergonomics Report™ in 2006, research psychologist Mark Koltko-Rivera, Ph.D., revisited points he had made on the psychological aspects of counterterrorism at a recent NATO symposium. Referring to the terror attacks in the United States in 2001, he asserted that technological savvy and psychology, intelligently used, could help keep America safe. Two security projects announced recently suggest researchers are thinking along the same lines. With one caveat, Dr. Koltko-Rivera sees promise in both.

 

“Each of the [projects] involves the intelligent application of psychology in human factors work, specifically for homeland security,” he said in a follow-up interview with The Ergonomics Report™ in May. Dr. Koltko-Rivera is Executive Vice-President and Director of Research for the Professional Services Group, Inc., a Florida-based company.

 

One is a European project to develop intelligent security cameras. The other is a screening project, called Hostile Intent, under development at the Department of Homeland Security in the United States.

The formal name of the European project is ISCAPS (Integrated Surveillance of Crowded Areas for Public Security). Mainly funded by the European Union, the 2-year project involves 10 European companies and research institutes.

 

As reported by Reuters, the intelligent cameras rely on algorithms that give computers a way to analyze video images and spot potential threats, ranging from abandoned baggage to people loitering suspiciously. For security staff, who often monitor images from dozens of surveillance cameras at once, the new technology offers the promise of picking out dangers that might otherwise be missed. The system focuses on abandoned bags; erratic movements or loitering by individuals; suspicious vehicle movements; and “drop dead scenarios” in which people fall to the ground, possibly affected by smoke or some kind of attack.

To define “suspicious behavior” the researchers conducted extensive interviews with security experts. They used actors to play out scenarios and mapped these into computer algorithms. The next step was setting up cameras to monitor public areas over long periods to enable the system to build statistical models showing busy and slack times and typical patterns of movement – and then to spot anything that diverged from the pattern.   
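
As a rough illustration of the baseline-and-deviation idea described above, the sketch below builds a simple per-hour statistical model of activity counts and flags observations that fall far outside the norm. It is a hypothetical, minimal Python example: the article does not publish the actual ISCAPS algorithms, so every name, number and threshold here is invented for illustration.

```python
# Hypothetical sketch: a per-hour baseline of "normal" activity, with a
# simple deviation test. The real ISCAPS models are not published in the
# article; the data and threshold below are invented for illustration.
from statistics import mean, stdev

def build_baseline(history):
    """history maps hour of day -> list of activity counts seen at that hour.
    Returns hour -> (mean, standard deviation) describing typical activity."""
    return {hour: (mean(counts), stdev(counts))
            for hour, counts in history.items() if len(counts) > 1}

def is_anomalous(baseline, hour, observed, threshold=3.0):
    """Flag an observation that deviates strongly from the hourly norm."""
    mu, sigma = baseline[hour]
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Usage: weeks of camera data establish the norm; a quiet-hour spike is
# flagged, but the final judgment is still left to a human operator.
history = {2: [5, 7, 6, 4, 6], 8: [120, 130, 125, 118, 127]}
baseline = build_baseline(history)
print(is_anomalous(baseline, 2, 40))   # True: unusual activity at 2 a.m.
print(is_anomalous(baseline, 8, 124))  # False: normal rush-hour level
```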

The need to quickly analyze dense, complex images of people and objects was the main challenge. Variations in weather and lighting added further complications.

 

The developers stress that while the system can flag something suspicious, it will still fall to a human operator to make the final call – for example, whether someone is running for a train or sprinting to escape the scene of a crime.

 

They caution it could still be 10 years before ISCAPS is in place and robust enough to monitor large crowds reliably for a full range of threats, without triggering excessive false alarms. In the meantime, more limited versions could be introduced.

The goal of Homeland Security’s Hostile Intent is to help screeners identify people who should not be allowed to enter the country. According to the Department’s news release about the program, Hostile Intent is geared towards detecting and gauging physiological and behavioral indications of deception and bad intentions. These include signs of nervousness, such as body heat, perspiration and certain facial movements.

Some 400 million people cross the US border every year, the news release noted, and most of them have no hostile intent whatsoever. Because it is non-invasive, the technology is expected to be able to screen travelers without slowing down traffic or inconveniencing them.

“It’s a game-changer,” says Sharla Rausch, division director for human factors at the department’s Science and Technology Directorate, which is developing the program. It will “help us get at the unknown threat without inconveniencing the good guy.”

Assessing the potential value of Europe’s ISCAPS, Dr. Koltko-Rivera pointed out that the one thing computers are good at is signal detection and its flip side, which is anomaly detection.

 

He noted that the program addresses the particular challenge of detecting very rare anomalies. “Hundreds of millions of people [are] involved in mass transportation when no one is engaged in a terrorist incident. … Human detectors are particularly prone to fatigue in looking at things for great long periods of time without an event happening,” he explained. “That has always been the weak point of any kind of detection or surveillance that is involving a rare event – it will be very difficult because of the way the human brain is set up. However, machines are tireless.” He noted that they malfunction occasionally, but don’t suffer from psychological fatigue. “They can go through 300 million incidents of movements and say, ‘Well, that one is normal. Let’s go to 300 million-and-one,’ and just roll right along. This is exactly what machines were built for. And so I think that is particularly intelligent use of psychological technology.”

 

He describes the Hostile Intent program as being developed to detect and gauge the physiological and behavioral indications of deception and bad intentions. “In principle, [it] is an excellent use of psychological technology, because, again, terrorists – determined humans – will always be able to find some way to work around some technological approach, but no one can run away from the simple fact of being [a] human being.”

 

The department is reticent about the nature of the tool that will be used and the indicators that will be flagged, he noted, but his impression is that the tool is meant to be used by people.

 

He sees safety in the lack of specifics about the program. “I am not familiar enough, nor would they want me to be familiar enough, with the precise items that they are focusing in on, either in Hostile Intent or in the automated program to analyze visual images [ISCAPS].” 

 

His caveat concerns cultural ergonomics. The matter of culture is important in both programs, he stressed, and particularly in Hostile Intent, because “behavior, including habitual behavior, is culturally moderated.” Citing an example, he said that in American culture, when a superior is talking to a subordinate, it is considered a sign of respect for the subordinate to look the superior right in the eyes. In many cultures it is exactly the opposite, and a sign of great disrespect to look a superior right in the eyes. “Now there are all sorts of culturally moderated behaviors that might work in here, and I would have to say they are going to want to validate whatever model they use across cultures because that is going to be an issue. … I hope they have somebody who is very well versed in cultural ergonomics to help them address these issues.”

 

Reading between the lines, Dr. Koltko-Rivera deduces that Hostile Intent is being used to train human observers rather than proceeding down a path to automation. He approves of that approach, but with one stipulation: “Those human observers need to be taught how to modulate what their expectations are according to culture.” A human observer could read hostile intent in a man looking down at his shoes while talking to a customs agent, he noted, but in a number of cultures looking down means no such thing. “And so they will have to pay attention to that. … Someone there has to be put on board for analyzing that material for cross-cultural validity.”

 

He describes inattention to cultural ergonomics as the place where American industry often goes wrong. “We assume that a modal American approach is the standard approach in the world against which all other behavioral indicators should be compared. That simply is not true, and never is it more [untrue] than in a situation at the borders. But with that caveat, I’m quite encouraged by both of these [programs].”

 

Dr. Koltko-Rivera rates the prospects for automating Hostile Intent as poor, but doesn’t view that as an obstacle to its potential effectiveness. There is always an advantage to introducing a human detector, he explained. Though they might not be able to explain why, “many human beings have a very highly tuned, highly developed sense [for] catching vibes.” Evolution is one possible explanation, he added. “There is a high adaptive value in being able to detect lying or hostile intent, so it doesn’t surprise me that that trait is selected for adaptation, and some people, of course, will have it more highly developed than others.” He sees high value in this trait for a policeman interrogating a criminal subject, or a border official responsible for determining “whether someone is coming into the United States on valid credentials and for a legitimate reason.”

 

There is particular value in “taking expertise and putting it in the bottle,” he noted, explaining that a very skilled interviewer can draw out of the human expert, through a structured interview, many of the informal algorithms that people use to do their expert performance. A chess expert, for example, may not be able to explain how and why he makes one or another successful move, but “the kind of people who design expert systems for an artificial intelligence can draw the expertise out of the expert and put it into more formal algorithms.” Applied, the principle would see interrogators deemed by colleagues to have a “sixth sense” called on to share their gift. A skillful review of their interrogation processes could identify the subliminal cues they recognize. “They [cue in] on certain aspects of the person’s presentation. It might be timbre of voice, it might be perspiration, it might be some subtle flutter of the eyelids … whatever it is, it can be identified and it can be taught to other people and … used to produce an artificial automated system … to assist other people who aren’t so good at it.”
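
As an illustration of “putting expertise in the bottle,” the hypothetical sketch below shows how cues drawn out of an expert interviewer might be written down as explicit, weighted rules that other screeners or an automated aid could apply. The cue names and weights are invented for this example and are not taken from Hostile Intent or any real screening system.

```python
# Hypothetical sketch: cues elicited from expert interviewers encoded as
# explicit, weighted rules. Cue names and weights are invented for the
# example and are not drawn from Hostile Intent or any real system.
RULES = [
    ("voice_timbre_shift", 0.4),    # strained or wavering voice
    ("excess_perspiration", 0.3),
    ("eyelid_flutter", 0.2),
    ("story_inconsistency", 0.6),
]

def review_score(observations, rules=RULES):
    """observations is a set of cue names noted during an interview.
    Returns a score meant to prompt a closer look by a person, not an
    automated decision."""
    return sum(weight for cue, weight in rules if cue in observations)

# Usage: two noted cues yield a combined score for a human reviewer.
print(review_score({"voice_timbre_shift", "story_inconsistency"}))  # 1.0
```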

 

Dr. Koltko-Rivera regards the successful automation of detection as a distant prospect, but if and when it is achieved, it is likely to work best when combined with human detection. “One of the things about the first system [ISCAPS] is that it plays to the strengths of the human detector and minimizes the human weaknesses, because the machine automatically skips over all of the scenarios where things are perfectly okay. [It] just focuses in on those situations where something appears anomalous, and then it calls the human being to come in and make that judgment. So that’s the best of all worlds.”

 

Sources: Dr. Mark Koltko-Rivera; Reuters; US Department of Homeland Security

This article originally appeared in The Ergonomics Report™ on 2007-05-17.