Judging Quality & The Survival of CX (Part 2)

Policy Debate Judging Frameworks

Just what is a judging paradigm in policy debate? A handful of resources on the web explain the schools of thought a judge may subscribe to, and the Wikipedia entry on judging policy debate serves as a reasonable starting point on the major schools. It lists Tabula Rasa (“tab”), Stock Issues, and Policymaker, while others such as Performance and Hypothesis Testing appear in other sources. Further explorations of judging paradigms can be found in Rostrum, such as the ever-valuable Bill Davis article “Burning Bridges: Zen and the Art of Judge Adaptation” (which should be required reading for every novice who wishes to advance further in debate).

Unfortunately for the debater, these classifications are often vague and imprecise, and they are frequently modified in their application within the brain of an individual judge. In my own case, I go to some length before the round to explain my own paradigm, which came from my policy coach and his adaptation of an aggressive 1980s college debate style to a highly conservative Nebraska debate environment:

Jamie’s Paradigm: Stock issues (solvency, harms, inherency, topicality, significance) give me jurisdiction to consider an affirmative ballot, and must subsequently be held to a minimal but viable threshold for that ballot to be considered. After that threshold is met, I am a tab judge who enjoys Kritik debate, considers counterplans valuable Negative offense that an Affirmative must not ignore, and is receptive to theory arguments that are well made, substantiated and sustained. I seek to avoid intervention beyond the selection and application of my disclosed paradigm within the round, unless I am forced into it. Consequently, weak abuse arguments demanding I discard my paradigm and intervene, and trivial topicality or conditionality arguments lacking sufficient explanation for their contradictions, are unlikely to find support in my model. Finally, be aware that debate is a communication sport, and proportionality of offense is critical to your success. Do not count on me to assume that mentioning a word once in an overview takes out your opponent’s three-minute, seven-contention attack on your position. Treat it trivially and my flow might do so as well. Make tags clean and understandable, and they go on my flow; unlike the other team, I’m not up there reviewing your evidence as you speak. And above all, remember that judge adaptation is critical in seeking a favorable ballot, as I’m the one who makes that decision.

It doesn’t sound like the “Common Paradigm Name Model” (Tab, Stock, Policymaker, etc.) really helps us attain that criterion of Educational Experience Quality (EEQ). Another approach toward paradigm disclosure worth exploring is the “Open Paradigm,” which many college debaters who judge high school debate declare themselves to hold. In yesterday’s quarterfinal round I co-judged, the other two judges declared themselves to be “more new school who are open to anything.” If we are conjecturing that a judging paradigm exists and that it is relevant both to the determination of the ballot and to the experience of EEQ, the news that a judge is “anything” is probably not terribly helpful. Fortunately, the ballots we see as coaches and the critiques disclosed after the round tend to suggest that there was a paradigm after all, though discovering it only after the round has been judged is not much help. Already, we’re seeing indications that the second category of concern in the conjecture, the communication of the paradigm, is tripping us up.

Categorical Standards for Policy Debate

In the world of governance, risk and compliance (GRC), practitioners are often required to work with standards frameworks that deconstruct the quality of a program or process into control categories, each composed of many discrete individual controls. For instance, the information security world often relies on a standard known as ISO 27002, which gives managers and analysts guidance on what areas to evaluate. ISO 27002 divides the world of information security into twelve domains, including such categories as Access Control, Human Resources Security, Business Continuity Management and Security Policy. Within each domain, numerous specific “controls” are defined; these represent the attributes that make up the domain and help us identify whether a business we’re looking at has good quality in its information security practice. An example of a specific control would be Password Complexity, where passwords are expected to have sufficient length, complexity, frequency of change and so on.
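To make that structure concrete, here is a minimal sketch in Python of how such a framework decomposes Quality: domains grouping discrete pass/fail controls. ISO 27002 itself is a prose standard, not code, so the names and the scoring rule below are illustrative assumptions rather than anything taken from the standard.

```python
# Hypothetical sketch of a control framework's shape: domains are groups of
# discrete controls, and quality is assessed bottom-up from the controls.
# All names here are illustrative, not drawn from the ISO 27002 text.

from dataclasses import dataclass, field

@dataclass
class Control:
    name: str        # e.g. "Password Complexity"
    compliant: bool  # each control is simply met or not met

@dataclass
class Domain:
    name: str                                           # e.g. "Access Control"
    controls: list[Control] = field(default_factory=list)

    def compliance_rate(self) -> float:
        """Share of this domain's controls that are in place."""
        if not self.controls:
            return 0.0
        return sum(c.compliant for c in self.controls) / len(self.controls)

access_control = Domain("Access Control", [
    Control("Password Complexity", compliant=True),
    Control("Password Change Frequency", compliant=False),
])
print(f"{access_control.name}: {access_control.compliance_rate():.0%} compliant")
```

The point of the sketch is the direction of inference: the judgment about the domain is built up from many small yes/no answers, not handed down from a label.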

This approach to finding Quality is very much bottom-up, in contrast to the top-down model debaters encounter in judge paradigm disclosures. The closest ISO 27002 comes to anything top-down is its twelve domains, which simply group controls that share a common benefit. Beyond that grouping, information security is too diverse a practice to sum up with a single label, a Paradigm defining the overall Quality of an institution’s security.

Can this approach help us identify a more accurate Policy Judging Paradigm, one useful in our evaluation of application, communication and judging error? Perhaps we can examine some candidate domains that policy debate might have, comparable to the information security domains detailed above.

Debate Paradigm Domain Measurement
We might also want to consider how these domains would be measured. Unlike information security controls, where a control is either in place or not (e.g. you are or are not compliant with the password complexity control), debate is fuzzier. Borrowing from behavioral economics, I’d argue that something like prospect theory’s approximation is at work in how we handle judging paradigms. According to the theory, we humans tend to approximate probabilities rather than quantify them precisely. Simply put, people tend to resolve probability down to about five types of chances:

1. Certain
2. Likely
3. Indifferent
4. Unlikely
5. Impossible

Kahneman and Tversky, the brilliant minds behind prospect theory, found that people tend to round probabilities up or down into these five groups. If you’ve got a 95% chance of winning the lottery, you’re already out spending the prize. If you have a 1% chance of being killed by lightning, you’re immune from harm and already headed to the golf course. This perception of probability is so significant that the risk management architecture I developed for First Data Corporation had to take it into account. Let’s call this concept the “Likelihood Factor” in our bottom-up, domain-centric debate paradigm.
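For the sake of illustration, here is a minimal sketch of that bucketing in Python. Prospect theory does not prescribe cutoff values, so the thresholds below are my own invented assumptions, not Kahneman and Tversky’s.

```python
# Hypothetical "Likelihood Factor" bucketing: collapse a precise probability
# into the five coarse categories people actually reason with. The cutoff
# values are invented for illustration; the theory does not prescribe them.

def likelihood_factor(p: float) -> str:
    """Map a probability in [0, 1] to one of five coarse likelihood buckets."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if p >= 0.95:
        return "Certain"      # rounded up: already out spending the prize
    if p >= 0.60:
        return "Likely"
    if p >= 0.40:
        return "Indifferent"
    if p > 0.05:
        return "Unlikely"
    return "Impossible"       # rounded down: already headed to the golf course

print(likelihood_factor(0.95))  # Certain
print(likelihood_factor(0.01))  # Impossible
```

A judge filling out the scorecard below is, in effect, reporting the output of a function like this for each domain.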

Policy Debate Domain Paradigm (PDDP) — A First Stab
So what does a first stab at a bottom-up paradigm look like? Let’s take these two concepts, domains and likelihoods, and put them together in a matrix where each judge details the likelihood of considering a given debate domain in their judging paradigm. I’ll take a set of commonly taught policy debate categories and represent them here with my own scores:

Policy Debate Domain Paradigm Scorecard
Judge: Jamie Saker
Instructions: Please indicate the likelihood that you would consider and determine a ballot on each of the following categories:

  • Certain: the category is mandatory.
  • Likely: you find this area very important in your paradigm.
  • Indifferent: you may or may not consider it significant, and are likely to leave that determination solely to the debaters to explain.
  • Unlikely: you are usually not inclined to consider it except in rare circumstances.
  • Impossible: you do not find this to be grounds on which you would give a ballot.

  • Stock Issues: Certain
  • Disadvantages: Likely
  • Topicality*: Certain
  • Untopical Counterplan: Likely
  • Topical Counterplan: Indifferent
  • Kritik: Indifferent
  • Conditional Negative Positions: Unlikely
  • Debate Theory: Indifferent
  • Abuse Theory: Unlikely
  • Specification: Indifferent
  • Performance: Indifferent
  • Accept Speed: Likely
  • Status Quo Presumption: Certain
  • Judge Intervention: Unlikely
  • Poetry**: Impossible

    Paradigm Comments: I expect a minimal stock threshold to be sustained by an Aff. Speed on evidence is fine, but if you speed through tags to where I can’t understand them, they will likely not be flowed, at your peril. Intervention occurs only when I’m forced into it, often to the disadvantage of an Aff.
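To connect the scorecard back to the bottom-up control model, here is a sketch of it as a simple lookup table in Python. The ballot-eligibility helper and its rule are my own illustrative assumptions, not a fixed part of the PDDP idea.

```python
# Sketch of the PDDP scorecard above as a lookup table. The helper's rule
# (anything short of "Impossible" is worth arguing) is an assumption made
# for illustration, not a standard.

saker_scorecard = {
    "Stock Issues": "Certain",
    "Disadvantages": "Likely",
    "Topicality": "Certain",
    "Untopical Counterplan": "Likely",
    "Topical Counterplan": "Indifferent",
    "Kritik": "Indifferent",
    "Conditional Negative Positions": "Unlikely",
    "Debate Theory": "Indifferent",
    "Abuse Theory": "Unlikely",
    "Specification": "Indifferent",
    "Performance": "Indifferent",
    "Accept Speed": "Likely",
    "Status Quo Presumption": "Certain",
    "Judge Intervention": "Unlikely",
    "Poetry": "Impossible",
}

def worth_running(domain: str) -> bool:
    """Could an argument in this domain plausibly win this judge's ballot?"""
    return saker_scorecard.get(domain, "Indifferent") != "Impossible"

print(worth_running("Kritik"))  # True: this judge may vote here
print(worth_running("Poetry"))  # False: don't spend your speech time here
```

A debater reading such a disclosure could make exactly this kind of quick go/no-go call on each position before the 1NC.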

Interpreting our PDDP Scorecard
First of all, the categories in this first stab should not be taken as the domains I would advocate for policy debate. I threw topicality in as a discrete area even though many of us consider it part of the stock issues, reflecting the fact that too many judges handle it differently from the rest of the affirmative burdens; indeed, many judges are polarized on topicality. I also threw poetry in for fun, but also to point out that Performance may not reasonably describe an autonomous domain, given that I am tolerant of performance elements like narratives and storytelling but personally find poetry reading in policy debate to be abusive… of this particular judge. Secondly, this scorecard approach is occasionally taken at some state debate tournaments, such as the Florida State Debate and nationals competition; however, that use tends to be both ad hoc in construction and limited to a brief review immediately before the round begins.

Those disclaimers aside, what do we have to examine from our scorecard, and does it help us in our pursuit of EEQ? At this point I’m going to pause before carrying this examination further into how judging paradigms influence the three areas of examination for error. I hope the reader has seen that I intend to shake out error through both the traditional paradigm disclosure model and a scorecard model, helping us get closer to discovering where that conjectured paradigm error might be present. Comments and emails (to jsaker at fmtabor.k12.ia.us) are very welcome.
