17 Aug Challenging the Reliability of Drug-Dog Alerts After Florida v. Harris?
Have you ever watched a trained detection dog at work? It’s an impressive sight. As we all know, the canine nose is one of the most sensitive chemical detectors the world has ever seen, and dogs can be trained to deploy it to sniff out just about anything. When I worked for the government, I spent a fair amount of time at ports of entry along the Mexican border, where customs officers walk detector dogs through the pre-primary lineup of vehicles, looking for drugs concealed in hidden vehicle compartments. Smugglers would go to great lengths to mask the smell: for example, we might find a few kilos of meth that had been vacuum-sealed in plastic, then coated with motor oil or detergent, wrapped in layers of duct tape, and submerged in the gas tank. How on earth, in the midst of that olfactory hubbub, can the dogs pick up the few meth molecules that slip out? But they can.
Here’s the thing about that “can,” though. The fact that we can do something doesn’t mean we always do. My college roommate once dunked a basketball. That was many years ago, but to this day, he is known to say that he “can dunk.” Something similar is true for athletes still in their primes. They can do amazing stuff, but they don’t succeed every time they try. The best hitters in baseball get out seven times out of ten. That’s why all of the important sports statistics track success vs. failure at a given task over hundreds or thousands of attempts. Without that data, it’s impossible to compare players. You wouldn’t pick a guy for your fantasy baseball team just because you once saw him hit a towering home run. You want to know what his performance looks like over thousands of at-bats. Real sports executives have long since adopted this “moneyball” approach, looking to statistics to see whether impressive individual short-term results are really representative of underlying fundamentals. They demand long-term statistical analysis of a player’s performance before signing a contract.
Now back to those drug dogs. Dog sniffs have a special place in the law of criminal procedure. If a dog sniff is done in a public place, it is not, in the eyes of the law, a “search” for purposes of the Fourth Amendment. What that means is that the police can do a dog sniff pretty much anywhere, anytime, on anyone, for any reason or no reason. (There are some limits—for example, the police can’t come onto your property for a pretextual “knock-and-talk” whose real purpose is to sniff the air coming out the front door; and they can’t delay releasing you from a traffic stop to wait for a drug dog to arrive.)
So take your average, run-of-the-mill traffic stop, for an average, run-of-the-mill traffic violation. If the police get the dog to the scene reasonably quickly, they can run the dog over the vehicle without any basis whatsoever for believing there are drugs inside. Then if the dog alerts, that alert, by itself, constitutes probable cause. And once you have probable cause, you can make an arrest, or, if the alert is to a vehicle, you can search the entire vehicle without a warrant.
Now why, you might be asking, does the dog alert in itself constitute probable cause? The answer is pretty clear: because courts accept the government’s assertion that drug dogs are extraordinarily accurate. No one doubts that drug dogs have the physiological ability to sniff out concealed loads. But what about performance statistics? How has the particular dog in question performed, in thousands of repetitions, over time?
Police departments generally do keep such records, so the data exists in most cases. The question is whether our courts want to use it. In Florida v. Harris, 133 S.Ct. 1050 (2013), the defendant challenged a search based on a dog sniff. He argued that the court should look at the statistical track record of the dog that had sniffed his car, and determine whether that dog was sufficiently reliable for its alert to serve as per se probable cause. The Florida courts agreed with him, and held that trial courts must make an individualized probable cause determination based on the statistical performance history of the dog in question. In particular, the court said, the government had to provide the dog’s false-positive rate—the percentage of alerts where there actually were no drugs.
The ruling made sense to me. If data exists, we should use it. And the probable-cause rule for dog alerts is based on an assumption of extremely high reliability. If that assumption is wrong, the rule is wrong. So if drug-dog reliability is actually variable from dog to dog, then we need to know about it, and that means we need to look at individual performance history.
In fact, it turns out, drug-dog reliability does vary significantly from dog to dog. Controlled studies have shown this, and also identified one cause: psychological cues from the human handler. Dogs are not just noses with four legs. They are great smellers, but they are also extremely empathetic, and are acutely attuned to the emotional and psychological state of their human handlers. Police dog handlers develop very close bonds with their assigned dog, and vice versa. Dogs want to please. They really, really want to please. And they’re very susceptible to conditioning by reward.
Florida hated the Harris ruling, and challenged it in the Supreme Court. And the federal government joined in on Florida’s side. The government’s position was that it should be enough for a police official to testify that the dog passed the department’s certification program; courts should not impose a performance-data requirement as a condition of a probable cause finding. The Supreme Court agreed with the government, holding that “evidence of a dog’s satisfactory performance in a certification or training program can itself provide sufficient reason to trust his alert.” The Fourth Amendment, said the Court, does not require consideration of the performance statistics of the dog.
The Court dismissed the concern that passing a “certification program” might not ensure accurate performance in the field: “law enforcement units have their own strong incentive to use effective training and certification programs, because only accurate drug-detection dogs enable officers to locate contraband without incurring unnecessary risks or wasting limited time and resources.” This comment strikes me—as it probably does you—as hopelessly naïve. Since a dog alert to a car gives an officer immediate carte blanche to search the entire car, there’s an obvious incentive to maximize the alert frequency of your dogs. And the Supreme Court—by rejecting Florida’s attempt to require performance statistics—has now multiplied that incentive a hundredfold. For example, why even keep full performance data if you aren’t going to be required to provide it?
In fact, that’s what the officer in Harris did: he only kept records when his dog’s sniffs resulted in arrests. Seriously? That’s like a pitcher asking for a multi-million dollar contract based on his ERA and strikeouts in his wins, but refusing to say anything about his losses. Pick whatever metaphor you want (dating history, GPA, trial record), it only gets more ridiculous. You can’t keep stats on only your victories, then present that as evidence of your success. If you solicited investors for your company by showing them account books that proudly displayed your profits but omitted all your losses, you’d go to jail.
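To see just how badly wins-only record-keeping distorts the picture, here is a toy simulation (the numbers are entirely hypothetical, not taken from the case): a dog whose alerts actually pan out 60% of the time looks flawless if the handler only logs the sniffs that led to arrests.

```python
import random

random.seed(0)
TRUE_ACCURACY = 0.60  # hypothetical: 60% of this dog's alerts actually find drugs

full_log = []      # honest record: every alert, hit or miss
arrests_only = []  # the Harris officer's method: log only the alerts that panned out

for _ in range(10_000):
    found_drugs = random.random() < TRUE_ACCURACY
    full_log.append(found_drugs)
    if found_drugs:  # a record is kept only when the sniff led to an arrest
        arrests_only.append(found_drugs)

print(f"full-log accuracy:     {sum(full_log) / len(full_log):.0%}")        # ~60%
print(f"arrests-only accuracy: {sum(arrests_only) / len(arrests_only):.0%}")  # 100%, always
```

The second number is 100% by construction: filtering out the misses before computing the rate guarantees a perfect score, no matter how unreliable the dog really is.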
A recent circuit case applying Harris, United States v. Bentley, No. 13-2995 (7th Cir., July 28, 2015), shows what happens when the courts don’t insist on evidence of reliability: officers simply train the dog to alert as often as possible. Think about it—if you know your dog is a “free search” card, and you won’t be required to put on proof of reliability, then what you want is a dog that alerts every single time. These traffic-stop cases arise when the officer wants to search the car, but doesn’t have probable cause to do so, and can’t get consent. If the officer had found any basis for p.c., he already would have searched. A false positive, in this context, doesn’t force officers to waste time on searches they wouldn’t have done otherwise—they only call for the sniff when they already wanted to search. That’s why the Supreme Court was naïve to think that departments would never encourage false positives.
This isn’t speculation. The officer in Bentley testified that the dog was only called out in situations where the officers already wanted to search the vehicle, and that the dog’s handler rewarded him every time he alerted—with no differentiation in reward between real positives and false positives. The cops in Bentley, in other words, were doing exactly what the Supreme Court in Harris cavalierly assured us would never happen.
Rewarding the dog for alerts regardless of their accuracy is the law-enforcement equivalent of the participation trophies handed out in kids’ soccer leagues. You can pass out all the trophies you want, but don’t point to them as evidence that your kid is the next Messi. And the dog got it. He knew what the handlers wanted and what it took to get his treat: this dog alerted an astonishing 93% of the time. And only 59.5% of those alerts were accurate. Fully 40.5% of them were false positives.
How does the dog’s accuracy compare to the accuracy of the human hunches that led to its call-out? Start with the total number of call-outs (100%). The dog alerted on 93% of them, and 40.5% of those alerts were false, so false alerts account for about 0.405 × 93% ≈ 37.7% of all call-outs. If we assume that the remaining 7% of call-outs—the ones where the dog didn’t alert—also involved no drugs, then drugs were actually present in about 0.595 × 93% ≈ 55.3% of call-outs. Thus the human hunches were accurate roughly 55% of the time, while the dog’s alerts were accurate 59.5% of the time. So the dog was statistically about as accurate as a human hunch. That should be important, because a human hunch is not probable cause.
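The arithmetic is easy to check. This short sketch works through the Bentley figures quoted above; the only assumption of mine is that the 7% of call-outs with no alert involved no drugs.

```python
# Figures from the Bentley opinion as quoted in the post.
alert_rate = 0.93        # dog alerted on 93% of call-outs
alert_accuracy = 0.595   # 59.5% of those alerts actually found drugs

# Expressed as fractions of ALL call-outs:
true_alerts  = alert_rate * alert_accuracy        # drugs present, dog alerted
false_alerts = alert_rate * (1 - alert_accuracy)  # no drugs, dog alerted anyway
non_alerts   = 1 - alert_rate                     # dog stayed quiet

# My assumption: non-alert call-outs involved no drugs, so the officers'
# hunch was right exactly when a true alert occurred.
hunch_accuracy = true_alerts

print(f"true alerts:    {true_alerts:.1%}")     # 55.3%
print(f"false alerts:   {false_alerts:.1%}")    # 37.7%
print(f"non-alerts:     {non_alerts:.1%}")      # 7.0%
print(f"hunch accuracy: {hunch_accuracy:.1%}")  # 55.3%
```

Note that converting the 40.5% false-alert rate into a share of all call-outs means multiplying by 0.93, not dividing by it; the three categories then sum to 100%, as they must.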
It’s hard to imagine a worse fact pattern for the government than Bentley, but unfortunately the Seventh Circuit didn’t think it was bad enough to suppress the evidence. Hey, 6 out of 10 ain’t bad, right? The court worried aloud about a “race to the bottom,” but that’s just talk: the case showed that race is already underway, and the decision is only going to push all the states in the Seventh Circuit down even further. If the court was really worried about a race to the bottom, it would have ruled that this dog is not reliable enough for its alerts to constitute probable cause.
There are two pieces of good news, though. First, the Supreme Court’s ruling in Harris leaves room for defendants to put this evidence on if the trial court allows it. Second, there are a lot of courts in this country: twelve federal circuits and fifty states. Not all of those courts are going to see things the way the Seventh Circuit did in Bentley. Any defendant facing charges based on evidence found following a dog sniff should aggressively pursue discovery on the dog’s performance record. The more light is shed on the variability of dog reliability, the more likely the courts are to finally stop treating dogs as four-legged search warrants.
Caleb Mason