Beta versions of sensemaking systems violate privacy and finger innocent people.
I was floored when I heard that the government’s new Computer Assisted Passenger Screening System (CAPSS II) is already being tested on Delta Airlines passengers. Basically, the system gives every prospective passenger a color code (green, yellow, or red) based on certain information, such as criminal records, credit histories, FBI files, etc. Passengers with a green code proceed without fuss to their gates. Yellow means further screening is necessary. Red means they get turned away before even going through security.
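The government has not disclosed its actual formula, so the following is only a minimal sketch of how a rule-based color coder of this kind might work. Every field name, threshold, and rule here is an illustrative assumption, not the real CAPSS II logic.

```python
# Hypothetical sketch of a CAPSS II-style color coder.
# The fields, thresholds, and rules are invented for illustration;
# the actual government formula is undisclosed.
from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    has_criminal_record: bool
    credit_score: int
    on_watch_list: bool  # assumed stand-in for FBI-file hits

def color_code(p: Passenger) -> str:
    """Return 'green', 'yellow', or 'red' for a prospective passenger."""
    if p.on_watch_list:
        return "red"     # turned away before even reaching security
    if p.has_criminal_record or p.credit_score < 600:
        return "yellow"  # pulled aside for further screening
    return "green"       # proceeds without fuss to the gate

print(color_code(Passenger("A. Traveler", False, 720, False)))  # green
```

Even this toy version shows why the chosen data points matter so much: a passenger with a spotless record sails through, while the rules say nothing about whether those inputs actually predict a threat.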
We don’t know the formula that turns this information into a color code, or how the information is gathered and verified. That leads the general public to suspect a certain arbitrariness in the color codes, and to fear being falsely coded red despite clean records and credit ratings over 700. It may be better than the current racial profiling system, but it still leaves a lot to be desired. Some people are even boycotting Delta for agreeing to test the system.
Many experts are saying the system is likely to be ineffective, fingering innocent people and letting the bad guys get through, because the data points are not all that significant. None of the 9/11 hijackers had criminal records or poor credit ratings. Some upstanding U.S. citizens have checkered pasts but have paid their debts to society. These arguments strike a chord with me. I have studied artificial intelligence (AI) and, specifically, sensemaking software for some time. A recent article in Technology Review only confirms my understanding of the various AI systems under consideration or in use by the government. While there is a lot of good work going on in sensemaking systems, no single system can sort through a landfill of paper scraps and correlate the few meaningful messages that yield a definitive yes or no on a suspect or terror scheme. And the best ones require state-of-the-art supercomputers to turn all the random noise into reliable information.
The real question is, given the urgency of the task, at what point do we start using what we have and, by extension, violating civil liberties for the sake of homeland security? I don’t have an answer to this. My inclination would be to place a higher premium on civil liberties than we have since 9/11. But that’s just my opinion. Perhaps a more productive approach is to recommend technologies that tend to minimize invasions of privacy and false accusations while maximizing the benefits to our homeland security. There are a lot of helpful things outside of sensemaking that we can do now. For example, upgrades to the communications systems linking the various intelligence bureaus have had an enormous impact on the effectiveness of those organizations, especially the FBI. I would be satisfied if our priorities focused on upgrading these systems and creating more openness between departments while sensemaking is still being tested. Once the systems are proven effective at clearing the innocent and revealing true threats, then we can roll out these AI systems.
As far as the sensemaking systems out there go, the one I’m most impressed with is i2’s Analyst’s Notebook, which gives intelligence agents visual tools to tie seemingly unrelated events into a single timeline. The beauty of the system is that it does not make decisions for the agents; it only presents information in a way that lets them draw the appropriate conclusions. According to the Technology Review story cited above, it is frequently used to brief President Bush on terror threats because of its visualization strengths.
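The core idea behind such a tool, stripped of its visualization layer, can be sketched in a few lines: events from unrelated sources are merged into one chronological view, and the judgment is left entirely to the analyst. This is an illustrative sketch, not i2’s actual software; the sources and events are invented.

```python
# Illustrative sketch of timeline merging (not i2's actual product):
# events from separate, unrelated sources are combined and sorted
# chronologically. The tool decides nothing; the analyst reads the
# sequence and draws conclusions.
from datetime import date

flight_logs = [(date(2001, 7, 1), "Suspect books one-way ticket")]
bank_records = [(date(2001, 6, 15), "Large wire transfer received")]
tip_line = [(date(2001, 8, 3), "Anonymous tip mentions suspect")]

# Tuples sort by their first element, so this orders events by date.
timeline = sorted(flight_logs + bank_records + tip_line)
for when, event in timeline:
    print(when.isoformat(), "-", event)
```

Laid out this way, a pattern that is invisible in three separate databases becomes obvious at a glance, which is exactly the strength the Technology Review story attributes to the visual approach.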
One way the government can calm the outcry over CAPSS II is to do a better job of explaining how it makes its color-coding determinations. That means explaining how the system works, rather than just rolling out a secret one. Americans are naturally suspicious of secret intelligence systems and are prone to conspiracy theories. And the intelligence and defense communities do not have a good record here: see J. Edgar Hoover’s tactics, the radiation experiments on unsuspecting citizens, etc. Just as openness between departments can increase their effectiveness, openness with the general population is also vital. Disinformation is one of the chief drivers of terrorism. If we opened up to each other and the world, a lot of the confusion and ill feelings toward America and Americans would dissipate, and a large proportion of terrorism would be defused.
James Mathewson is editor of ComputerUser magazine and ComputerUser.com