Some sorts of “evidence” in evidence-based practice seem to carry more weight (e.g., randomized controlled trials [RCTs]) than others (e.g., case studies) in applied sport and exercise psychology research. In this article, we explore some of the shibboleths of evidence-based treatment and how some “gold standards,” such as RCTs, may, when misused or sub-optimally executed, provide only tenuous, incomplete, and confounded evidence for what we choose to do in practice. We inquire into the relevance and meaningfulness of practitioner-evacuated research and of investigations that use flawed statistical reasoning, and we also ask a central question in evaluating evidence: just because some sorts of positive changes can be measured and counted in treatment outcome studies, do they really “count”?