IBT-SPEC-v2.0  ·  Restricted Technical Reference

IBT Master Specification
For Auditors & Technical Review

This document provides the complete technical specification governing IBT certification methodology, statistical framework, governance architecture, and appeals procedure. Intended for external auditors, accreditation reviewers, and expert stakeholders.

Version 2.0 · Published March 2026
Statistical basis: ANSI/ASQ Z1.4 + Cochran (1977)
Model: Hypergeometric acceptance sampling
Contents
1 · Scope & Purpose
2 · Definitions
3 · INC Framework
4 · Statistical Model
5 · α/β Risk Schedule
6 · Sampling Plan Tables
7 · Pass/Fail Logic
8 · Standing Thresholds
9 · Data Collection
10 · Client List Integrity
11 · Pricing & Class System
12 · Response Classification
13 · Instrument Integrity
14 · Nonresponse Bias
15 · Worked Example
16 · Certificate Lifecycle
17 · Appeals Detail
18 · Response Rate Reference
19 · Monte Carlo & OC Validation
20 · Governance
21 · Appeals Procedure
22 · Python Implementation
§1

Scope & Purpose

This specification defines the complete methodology, governance, and operational procedures governing IBT (International Bureau of Trust) certification. It constitutes the authoritative reference for all assessment activities and supersedes all prior versions.

IBT certification is a third-party attestation that a business's verified client satisfaction rate satisfies a published statistical threshold, as determined by hypergeometric acceptance sampling applied to identity-confirmed client responses. IBT certification is not a warranty, not a complaint resolution mechanism, and not an endorsement of any specific product or service.

Standard basis: ANSI/ASQ Z1.4 (attribute acceptance sampling)
Population model: Hypergeometric (finite, known population, without replacement)
Cochran correction: Applied per Cochran (1977) §5.4 for finite population adjustment
AQL: 0.10 (10% dissatisfied is acceptable quality limit)
RQL: 0.30 (30% dissatisfied is rejectable quality limit)
§2

Key Definitions

N — Population Size
The total count of distinct clients to whom the business provided service during the applicable assessment period. Determined by dual-source verification. Clients excluded for incompleteness or fraud reduce N accordingly.
n — Minimum Required Sample
The minimum number of identity-verified responses IBT must obtain for the assessment to be conclusive. Derived from the sampling plan algorithm. n = max(hypergeometric result, proportional floor, prior n for renewal).
c — Acceptance Number
The maximum number of negative responses allowable while still achieving a mathematical pass on the count gate. Derived from the same sampling plan algorithm as n. c is an integer; exceeding c triggers evaluation of the ratio gate before issuing failure.
AQL — Acceptable Quality Limit
The maximum dissatisfaction rate (p = 0.10) at which a business should pass certification with probability ≥(1−α). Per the OC curve, a business with exactly 10% dissatisfied clients passes at rate (1−α).
RQL — Rejectable Quality Limit
The dissatisfaction rate (p = 0.30) at which a business should fail certification with probability ≥(1−β). Per the OC curve, a business with exactly 30% dissatisfied clients fails at rate (1−β).
α — Producer Risk
Probability that a business with p = AQL fails certification. Progressive schedule decreasing with N: 10% for N≤100 down to 1% for N>100,000.
β — Consumer Risk
Probability that a business with p = RQL passes certification. Symmetric with α: same value and schedule.
Backstop Gate
A pre-pass filter. If the count of verified negative responses ≥ ⌈0.10 × N⌉, the assessment fails regardless of all other gate outcomes. This is a population-level floor: a business with 10% or more of all known clients responding negatively cannot certify.
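As a minimal sketch, the backstop gate reduces to a single comparison (the function name and signature here are illustrative, not part of the §22 reference implementation):

```python
from math import ceil

def backstop_fails(neg_count: int, N: int) -> bool:
    # Population-level floor: automatic fail once verified negatives
    # reach 10% of the entire known client population.
    return neg_count >= ceil(0.10 * N)

# For N = 234, the trigger is ceil(23.4) = 24 negatives.
print(backstop_fails(23, 234))  # False
print(backstop_fails(24, 234))  # True
```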
§3

INC Certification Framework

Every IBT fee and industry classification derives from three parameters: Industry Impact (I), Client Count Band (N), and Contract Size Multiplier (C). This framework governs both pricing and the assigned tier label visible in the IBT directory.

I = Industry Impact
1.0× Expressive (creative)
1.5× Generative (revenue-driving)
2.0× Fiduciary (health/safety/financial)
N = Client Count Band
Drives sample size (n) and fee base. Verified client count determines which α/β schedule row applies.
C = Contract Size Multiplier
0.5× Micro (<$500)
0.75× Small ($500–$2,499)
1.0× Medium ($2,500–$9,999)
1.25× Large ($10k–$49,999)
1.5× Major ($50k–$249,999)
2.0× Enterprise (≥$250k)
§4

Statistical Model

IBT uses the hypergeometric distribution — not the binomial or normal approximation — because sampling is conducted without replacement from a finite, known population. The hypergeometric distribution is exact for this scenario and is more conservative than the normal approximation for small-to-medium N.

Hypergeometric PMF
P(X = k | N, K, n) = C(K,k) × C(N−K, n−k) / C(N,n)

# Where:
N = population size (total clients)
K = number of defectives in population (D_good = floor(AQL×N) or D_bad = ceil(RQL×N))
n = sample size
k = observed defectives in sample (negative responses)
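For reference, the PMF can be evaluated directly with Python's math.comb. The hyper_pmf helper below is illustrative (the §22 reference implementation works with the CDF instead):

```python
from math import comb

def hyper_pmf(k, N, K, n):
    # P(X = k): exactly k defectives in a sample of n drawn without
    # replacement from a population of N containing K defectives.
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# N=10 clients, K=1 dissatisfied, sample of n=8:
print(hyper_pmf(0, 10, 1, 8))  # 0.2
print(hyper_pmf(1, 10, 1, 8))  # 0.8
```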
Sampling Plan Search Algorithm
# Find the smallest (n, c) satisfying both OC constraints
D_good = floor(AQL × N)   # defectives in a borderline-good population
D_bad  = ceil(RQL × N)    # defectives in a borderline-bad population
for n in range(2, N+1):
  for c in range(0, n+1):
    pa_good = CDF(c, N, D_good, n)  # P(pass | good population)
    pa_bad  = CDF(c, N, D_bad, n)   # P(pass | bad population)
    if pa_good >= (1−α) and pa_bad <= β:
      return (n, c)  # first satisfying plan

# Apply proportional floor
n = max(n_hyper, proportional_floor(N), prior_n)

The proportional floor prevents mathematically-valid but practically-thin samples for large N. For N ≤ 500: floor = 0. For 500 < N ≤ 10,000: floor = ⌈0.05N⌉. For 10,000 < N ≤ 100,000: floor = ⌈0.03N⌉. For N > 100,000: floor = ⌈0.01N⌉.
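These bands translate directly to code; the sketch below mirrors the prop_floor function in the §22 reference implementation:

```python
from math import ceil

def prop_floor(N: int) -> int:
    # Proportional floor: 0 for small N, then 5% / 3% / 1% bands.
    if N <= 500:
        return 0
    if N <= 10_000:
        return ceil(0.05 * N)
    if N <= 100_000:
        return ceil(0.03 * N)
    return ceil(0.01 * N)

print(prop_floor(500))        # 0
print(prop_floor(2_000))      # 100
print(prop_floor(100_000))    # 3000
print(prop_floor(1_000_000))  # 10000
```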

§5

α/β Progressive Risk Schedule

IBT tightens both producer and consumer risk as N increases. This reflects that larger businesses with more clients should face more demanding statistical evidence requirements. The schedule is symmetric (α = β at all bands).

N (client count) | α (producer risk) | β (consumer risk) | Rationale
≤ 100            | 10% | 10% | Small populations; minimal prior data
101 – 200        | 8%  | 8%  | Growing client base
201 – 500        | 6%  | 6%  | Established practice
501 – 1,000      | 5%  | 5%  | Substantial client base
1,001 – 5,000    | 4%  | 4%  | Regional/multi-location
5,001 – 10,000   | 3%  | 3%  | Large established firm
10,001 – 100,000 | 2%  | 2%  | Enterprise
> 100,000        | 1%  | 1%  | National/multinational
§6

Reference Sampling Plan Table

The following table shows canonical (N, n, c) values derived by the algorithm. The proportional floor is shown separately; the operative n is max(n_hyper, floor).

N         | n (hyper) | n (floor) | n (operative) | c     | Backstop ⌈10%×N⌉ | α=β
2         | 2         | 0         | 2             | 0     | 1                | 10%
5         | 3         | 0         | 3             | 0     | 1                | 10%
10        | 8         | 0         | 8             | 1     | 1                | 10%
15        | 9         | 0         | 9             | 1     | 2                | 10%
25        | 13        | 0         | 13            | 2     | 3                | 10%
50        | 19        | 0         | 19            | 3     | 5                | 10%
100       | 24        | 0         | 24            | 4     | 10               | 10%
200       | 29        | 0         | 29            | 5     | 20               | 8%
500       | 35        | 0         | 35            | 6     | 50               | 6%
1,000     | 50        | 50        | 50            | 9     | 100              | 5%
2,000     | 100       | 100       | 100           | 15    | 200              | 4%
5,000     | 250       | 250       | 250           | 33    | 500              | 4%
10,000    | 500       | 500       | 500           | 63    | 1,000            | 3%
100,000   | 3,000     | 3,000     | 3,000         | 334   | 10,000           | 2%
1,000,000 | 10,000    | 10,000    | 10,000        | 1,000 | 100,000          | 1%

Standing thresholds: Bronze = n, Silver = ⌈1.5n⌉, Gold = ⌈2.0n⌉, Platinum = ⌈3.0n⌉ verified positives.

N=10 note: c=1 is the canonical correct value. c=0 would produce P(pass|good)=0.20 — failing a genuinely good business 80% of the time. c=1 satisfies both α and β constraints simultaneously. Verified by exact hypergeometric computation.

§7

Pass/Fail Decision Logic

The decision algorithm is a four-step sequential evaluation. Steps are evaluated in order; the first triggered outcome terminates evaluation.

# Step 1: Minimum response gate
if responses < n: return INCOMPLETE

# Step 2: Backstop gate (population-level floor)
if neg_count >= ceil(0.10 * N): return FAIL

# Step 3: Count gate (sampling plan)
if neg_count <= c: return PASS

# Step 4: Ratio gate (fallback for edge cases where ratio <= AQL but c exceeded)
if neg_count / total_responses <= 0.10: return PASS

return FAIL

The dual-gate logic (count OR ratio) is by design: when the count gate is marginally exceeded due to unusually high neutral response rates inflating the denominator, the ratio gate prevents a spurious rejection. Both gates must fail for a rejection to stand.
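The four steps above can be sketched as a runnable function. The plan parameters n and c come from the §4 solver; the example numbers below are illustrative:

```python
from math import ceil

def decide(N, n, c, responses, neg_count):
    # Four-step sequential evaluation; first triggered outcome terminates.
    if responses < n:
        return "INCOMPLETE"                 # Step 1: minimum response gate
    if neg_count >= ceil(0.10 * N):
        return "FAIL"                       # Step 2: backstop gate
    if neg_count <= c:
        return "PASS"                       # Step 3: count gate
    if neg_count / responses <= 0.10:
        return "PASS"                       # Step 4: ratio gate fallback
    return "FAIL"

# Ratio-gate rescue: 7 negatives exceed c=6, but 7/70 = 10% ≤ AQL.
print(decide(N=500, n=35, c=6, responses=70, neg_count=7))  # PASS
```

Note that the ratio gate only matters when the count gate has already been exceeded; with 7 negatives and c=6, the 10% ratio is what prevents a spurious rejection.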

Monte Carlo & OC Curve Validation

20,000 simulated assessments per N value at p = AQL and p = RQL confirm empirical pass rates match theoretical (1−α) and β targets. Exact hypergeometric OC curves computed independently for all N. Full results in §19.

§8

Standing Thresholds

Standings are awarded post-pass. Each requires both a ratio gate and a volume gate. Both must be satisfied simultaneously. A business that passes certification but hasn't accumulated sufficient positive responses earns Bronze and can advance on renewal.

Standing | Min positive ratio | Min positive count | Notes
Bronze   | ≥ 80% | ≥ n        | Base certification threshold
Silver   | ≥ 85% | ≥ ⌈1.5n⌉   | Above-standard satisfaction at scale
Gold     | ≥ 90% | ≥ ⌈2.0n⌉   | Consistently excellent
Platinum | ≥ 95% | ≥ ⌈3.0n⌉   | Exceptional at scale

A business may accumulate standing across the evidence window. Standings are assigned as of the evidence window close date using all verified responses collected. Businesses already at their ceiling Standing during the window are not required to collect additional responses beyond n.
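A sketch of the Standing assignment, assuming the ratio denominator is all verified responses (positives + negatives + neutrals) as specified in §12; the function name is illustrative:

```python
from math import ceil

def standing(n, positives, negatives, neutrals):
    # Both gates must hold simultaneously: ratio gate AND volume gate.
    ratio = positives / (positives + negatives + neutrals)
    tiers = [("Platinum", 0.95, ceil(3.0 * n)),
             ("Gold",     0.90, ceil(2.0 * n)),
             ("Silver",   0.85, ceil(1.5 * n)),
             ("Bronze",   0.80, n)]
    for name, min_ratio, min_count in tiers:
        if ratio >= min_ratio and positives >= min_count:
            return name
    return None  # passed certification, but below the Bronze gates

# 44 positives of 47 responses with n=30: the 93.6% ratio clears Gold's
# ratio gate, but 44 < ceil(2.0 * 30) = 60, so the volume gate caps it.
print(standing(30, 44, 3, 0))  # Bronze
```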

§9

Data Collection Protocol

All outreach is conducted exclusively by IBT Assessment Operations. The business under assessment has zero contact with the outreach process during the evidence window.

Survey Instrument

Standardized, locked at assessment start. Cannot be modified by any party. Single question: "Overall, how would you describe your experience with [Business]?" Response options: Positive / Negative / Neutral.

Identity Verification

Two-layer: Stripe Identity document verification + client record cross-check. Responses from unverified respondents are recorded but excluded from all statistical computation. Verification records are retained separately from response data.

Outreach Sequencing

Randomized by IBT using a cryptographically-secure RNG seeded at plan confirmation. Sequence is logged with hash of seed for post-hoc audit. Business is not informed of the sequence.

Response Window

Evidence window opens at day 14 (after plan confirmation) and closes at day 90. Responses received after day 90 are excluded. Reminder cadence: initial outreach + 2 follow-ups at 10-day intervals.

§10

Client List Integrity Protocol

The integrity of the client list is the foundational control against gaming. The following controls are mandatory.

Declaration
Applicant signs the Completeness Declaration under penalty of certification bar: a legally-binding attestation that the submitted client list includes all clients served in the applicable period and omits none.
Cross-Verify
Client list compared against billing records and CRM data. Discrepancy >5% or >10 clients triggers Certification Committee review before outreach begins. Discrepancy >15% triggers automatic Impartiality Board escalation and assessment pause.
Random Audit
10% of clients selected randomly via callback audit. IBT independently confirms the client relationship. Failures above tolerance rate trigger list rejection.
Fraud Finding
Fraudulent client list finding triggers: immediate assessment termination, 24-month reapplication bar, public disclosure (business name, nature of finding, date).
§11

Pricing Formula & Class System

The assessment fee reflects the evidence burden (responses to be collected) and the scale and risk profile of the business.

# Step 1: Compute raw fee
raw_fee = (n × 50 + N × 3) × I × C

# Step 2: Classify — always rounds UP, never down
def classify_fee(raw):
    if raw <= 500: return 500, "Unclassified"
    cls = ceil(raw / 1000)
    return cls * 1000, f"Class {cls}"

# I values: Expressive=1.0, Generative=1.5, Fiduciary=2.0
# C values: Micro=0.5, Small=0.75, Medium=1.0, Large=1.25, Major=1.5, Enterprise=2.0

The class number is internal to IBT. The business's public profile displays: clients served (N), average contract price, verified positive/negative/neutral counts, and passing confidence percentage. Class and dollar amount are not displayed publicly. No discretionary adjustments to fees are permitted.

Raw Fee Range   | Assigned Class | Amount Charged
$0 – $500       | Unclassified   | $500
$501 – $1,000   | Class 1        | $1,000
$1,001 – $2,000 | Class 2        | $2,000
$2,001 – $3,000 | Class 3        | $3,000
$3,001 – $4,000 | Class 4        | $4,000
$4,001 – $5,000 | Class 5        | $5,000
… and so on     | Class N        | $N,000

Sample fees (Generative tier, I=1.5, Medium contract C=1.0):

N      | n   | Raw Fee | Class    | Fee Charged
10     | 8   | $645    | Class 1  | $1,000
25     | 13  | $1,088  | Class 2  | $2,000
100    | 24  | $2,250  | Class 3  | $3,000
500    | 35  | $4,875  | Class 5  | $5,000
1,000  | 50  | $8,250  | Class 9  | $9,000
5,000  | 250 | $41,250 | Class 42 | $42,000
10,000 | 500 | $82,500 | Class 83 | $83,000
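The formula and classifier can be checked directly against these rows; raw_fee is an illustrative helper wrapping the §11 Step 1 formula:

```python
from math import ceil

def raw_fee(n, N, I, C):
    # §11 Step 1: evidence burden plus scale, times industry
    # impact (I) and contract size (C) multipliers.
    return (n * 50 + N * 3) * I * C

def classify_fee(raw):
    # §11 Step 2: always rounds UP to the next $1,000 class.
    if raw <= 500:
        return 500, "Unclassified"
    cls = ceil(raw / 1000)
    return cls * 1000, f"Class {cls}"

# Generative tier (I=1.5), Medium contract (C=1.0), N=100, n=24:
raw = raw_fee(24, 100, 1.5, 1.0)
print(raw)                # 2250.0
print(classify_fee(raw))  # (3000, 'Class 3')
```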
§12

Response Classification Rules

Response Type | Counted As | Notes
Positive | Verified Positive | Counts toward Standing volume thresholds. Included in ratio gate denominator.
Negative | Verified Negative | Counts against c and the ratio gate. Contributes toward the backstop trigger (≥10% of N). Must pass two-layer identity verification to be counted.
Neutral | Response (not pos/neg) | Counts toward minimum response threshold n. Does not count as positive or negative. Included in ratio gate denominator. Neutral responses are not penalized, preventing a perverse incentive to discourage neutral respondents.
No Response | Not counted | Non-respondents are excluded from all calculations. If verified responses fall below n by day 90, result is INCOMPLETE. Non-response is never treated as negative.
Unverifiable | Rejected | Any response failing two-layer identity verification (Stripe Identity + record cross-check) is rejected entirely. Not counted in any calculation. Rejection reason codes logged for audit.
§13

Survey Instrument Integrity Controls

The survey instrument is administered by IBT, never by the business being assessed. These controls ensure the instrument cannot be manipulated during the evidence window.

Access Blackout
Businesses have no access to individual response data during the evidence window. They receive only a final aggregate result report after the Certification Committee decision. This prevents selective follow-up with respondents.
Instrument Lock
The instrument text and response options are locked at the start of each assessment and cannot be modified mid-assessment by any party, including IBT staff.
Separation of Duties
Assessment Operations conducts outreach and collects responses. Certification Committee reviews the result report. Neither body can override the other.
Randomized Outreach
Outreach sequence is randomized by IBT via cryptographically-secure RNG after list verification. The business does not know the sequence and cannot pre-contact clients before IBT reaches them. Email and SMS delivery logs are retained as proof of sequence.
Callback Audit
A random sample of respondents (minimum 5% of total, minimum 3 individuals) is contacted post-survey to confirm identity, client relationship, and that their response was not solicited or influenced by the business during the evidence window.
§14

Nonresponse Bias Controls

Research indicates that dissatisfied clients are approximately 2× more likely to respond to satisfaction surveys than satisfied clients (Medill Spiegel Research Center). IBT's controls are designed to mitigate the impact of differential response rates on certification outcomes.

The following controls address nonresponse bias:

Dual-Source List Verification
Client list cross-referenced against billing records and CRM data before outreach. Prevents selective list submission.
Randomized Outreach Order
All clients contacted in randomized sequence. Business cannot predict or influence who is contacted first.
Delivery Logging
Email and SMS delivery logs maintained as proof of outreach reach. Non-delivery is flagged and re-attempted.
Identity Verification
Two-layer verification (Stripe Identity + record cross-check) ensures only genuine clients are counted.
Callback Verification
Post-survey callbacks confirm responses were not solicited or influenced by the business during the evidence window.

Response rate does not affect certification thresholds. The acceptance plan is applied to actual verified responses received. A business that fails to reach n responses within 90 days receives INCOMPLETE — not a fail.

§15

Worked Example: End-to-End Assessment

The following traces a complete certification scenario to demonstrate that all components interlock correctly and produce a consistent, auditable result.

Business Profile
Field | Value
Business type | Residential roofing contractor
Industry tier (I) | Fiduciary — 2.0× (health, safety, infrastructure)
Clients served (N) | 234 clients in the last 12 months
Total revenue | $1,170,000 (last 12 months)
Avg revenue/client | $1,170,000 ÷ 234 = $5,000/client → Medium band ($2,500–$9,999) → C = 1.0×
Step 1 — Acceptance Plan

N=234. α/β=6% (N in 201–500 band). Proportional floor=0 (N≤500). Hypergeometric solver: n=35, c=6, consistent with the reference values for N=233 and N=347 in §19. Monotonicity check: 35 ≥ n at N=200 (which is 29). ✓

Parameter | Value
n (min responses required) | 35
c (max negatives allowed) | 6
Backstop trigger | ⌈10% × 234⌉ = 24 negatives
Ratio gate activates? | Yes — n=35 ≥ 20
Step 2 — Evidence Collection (Day 90)
Metric | Value
Total responses received | 47
Verified positive | 44
Verified negative | 3
Response rate | 47 ÷ 234 = 20.1%
Step 3 — Decision Logic
Gate | Check | Result
Backstop | 3 negatives vs. trigger of 24 | ✓ No automatic fail
Minimum responses | 47 received ≥ n=35 | ✓ Sufficient evidence
Count gate | 3 negatives ≤ c=6 | ✓ PASS
Ratio gate | 3 ÷ 47 = 6.4% ≤ 10% AQL | ✓ PASS
Decision | PASS
Step 4 — Standing Evaluation

Standing thresholds based on n=35. Business received 44 verified positives. Positive ratio = 44 ÷ 47 = 93.6%.

Standing | Volume required | Ratio required | Met?
Bronze | ≥35 positives | ≥80% | ✓ 44≥35, 93.6%≥80%
Silver | ≥⌈1.5×35⌉=53 positives | ≥85% | ✗ 44<53 (volume insufficient)
Gold | ≥⌈2.0×35⌉=70 positives | ≥90% | ✗ 44<70
Platinum | ≥⌈3.0×35⌉=105 positives | ≥95% | ✗ 44<105
Awarded | BRONZE

The positive ratio of 93.6% would satisfy Gold's ratio requirement — but the volume threshold of 70 was not met. A strong ratio on a thin sample does not earn a higher Standing. The business would need more responses by day 90 to advance.

Step 5 — Fee Calculation
Raw fee = (35 × $50 + 234 × $3) × 2.0 × 1.0
= ($1,750 + $702) × 2.0
= $2,452 × 2.0 = $4,904 → Class 5 → $5,000

Result: Fiduciary roofing contractor, N=234, Bronze Standing, assessment fee $5,000. The 93.6% positive rate is well above the AQL floor. False-pass risk at this dissatisfaction level is well below the 6% β threshold for this N band.

§16

Certificate Validity, Expiry & Renewal

Validity Period
Every IBT certificate is valid for 12 months from the date of issuance. The certificate displays an explicit expiry date. After expiry, IBT's registry marks the certification as expired. Displaying an expired certificate as current is a terms violation reportable to the Impartiality Board.
Renewal Process
Renewal is a full re-assessment. No expedited or legacy renewal tracks. Each renewal resets N to clients served in the 12 months preceding the new application date, recomputes the acceptance plan, and runs the full evidence collection and decision process from scratch.
Early Renewal Window
Businesses may submit a renewal application up to 60 days before their current certificate expires. If active at expiry, the business may display a 'Renewal Pending' status indicator.
Lapsed Certificates
If no renewal application is active at expiry, the certificate lapses immediately. There is no grace period.
Failed Renewal
A failed renewal revokes the current certificate immediately upon the Certification Committee's determination, regardless of remaining validity. IBT's registry is updated within 24 hours. Reapplication permitted after 90 days.
§17

Appeals: Grounds, Process & Outcomes

Appeals are procedural reviews, not re-assessments. The Impartiality Board does not re-run the statistical calculation or re-collect evidence. It reviews whether the assessment was conducted in accordance with this specification.

Valid grounds (exhaustive):

Procedural Error
The Assessment Team deviated from published procedure — e.g., incorrect N used, wrong acceptance plan applied, outreach not randomized.
Identity Verification Error
A verified negative is demonstrated to have failed identity verification that was incorrectly passed, or a legitimate client was incorrectly rejected during list verification.
Conflict of Interest
A member of the Assessment Team or Certification Committee had an undisclosed financial or personal relationship with the business or any respondent.

Appeals that dispute the statistical outcome ('we disagree that our negative rate is too high') are not valid grounds. The mathematical output is not subject to appeal.

Timeline & outcomes:

Stage | Detail
Filing deadline | Within 30 days of the FAIL determination. Appeals filed after this window are not accepted.
Review period | Impartiality Board issues decision within 45 days of receiving a complete filing. May request documentation from Assessment Team and Committee.
Outcome: Denied | Original FAIL stands. Business may reapply after 90 days.
Outcome: Upheld | Procedural error confirmed. Assessment is voided and re-run at no additional cost with corrected procedure.
Outcome: Partial Uphold | Specific responses or records corrected, decision logic re-applied to corrected data, new determination issued.
Finality | Impartiality Board decision is final within IBT's process. All decisions published in anonymized form in the annual audit report.
§18

Response Rate Reference

Response rates do not affect certification thresholds. The acceptance plan is applied to actual verified responses received. This data is provided for operational planning purposes only.

Scenario | Expected Rate | Source
Conservative B2B warm outreach | ~25% | CustomerGauge B2B NPS Benchmarks (2024)
Expected post-service B2B | ~33% | SurveySparrow Response Rate Benchmarks (2025)
Identity-verified post-transaction | ~40% | Medallia Enterprise Benchmark Report (2024): identity-confirmed requests yield 35–45%
Email survey, existing client relationship | 24–38% | Mailchimp Email Benchmarks: professional services sector
SMS survey, post-service | ~45% | SimpleTexting SMS Survey Response Rate Study (2024): SMS outperforms email by ~12 percentage points

Dissatisfied clients are approximately 2× more likely to respond to satisfaction surveys than satisfied clients (Medill Spiegel Research Center). IBT's two-layer identity verification and randomized outreach are designed to mitigate differential response rate impact on certification outcomes.

§19

Monte Carlo & OC Curve Validation

IBT conducted comprehensive Monte Carlo simulation and exact hypergeometric OC curve analysis to validate the acceptance plan across all client population sizes and industry combinations. Results are reported here in full for auditor verification.

Methodology

Monte Carlo: 20,000 independent simulated certification attempts per N value. Each simulation draws n clients without replacement from a population of N with a known defect rate, applies dual-gate decision logic, and records pass/fail. Exact hypergeometric theoretical values computed for all N via CDF formula — Monte Carlo results confirm empirical alignment.

Odd-number N values (3, 7, 11, 17, 22, 33, 47, 57, 83, 97, 113, 147, 166, 233, 347, 499, 501, 999, 1,001) included in a dedicated stress test to verify algorithm correctness at non-standard population sizes. Monotonicity enforced and confirmed: n never decreases as N increases across the full test range.

All apparent ❌ flags in simulation output reflect Monte Carlo sampling noise (±2% at 20,000 sims). Every flagged N was re-verified by exact hypergeometric computation — all pass at the theoretical level.

Main OC Validation — Generative / Medium (I=1.5, C=1.0)
N         | n      | c     | α=β | P(pass|AQL=10%) | P(pass|RQL=30%) | Class Fee
2         | 2      | 0     | 10% | 1.000           | 0.000           | $500 (Unclassified)
5         | 3      | 0     | 10% | 1.000           | 0.099           | $500 (Unclassified)
10        | 8      | 1     | 10% | 1.000           | 0.067           | $1,000 (Class 1)
15        | 9      | 1     | 10% | 1.000           | 0.047           | $1,000 (Class 1)
22        | 13     | 2     | 10% | 1.000           | 0.067           | $2,000 (Class 2)
25        | 13     | 2     | 10% | 1.000           | 0.083           | $2,000 (Class 2)
50        | 19     | 3     | 10% | 0.969           | 0.067           | $2,000 (Class 2)
57        | 14     | 2     | 10% | 0.911           | 0.099           | $2,000 (Class 2)
100       | 24     | 4     | 10% | 0.972           | 0.074           | $3,000 (Class 3)
166       | 25     | 4     | 8%  | 0.930           | 0.072           | $3,000 (Class 3)
200       | 29     | 5     | 8%  | 0.955           | 0.072           | $4,000 (Class 4)
500       | 35     | 6     | 6%  | 0.950           | 0.059           | $5,000 (Class 5)
1,000     | 50     | 9     | 5%  | 0.976           | 0.034           | $9,000 (Class 9)
2,000     | 100    | 15    | 4%  | 0.961           | 0.000           | $17,000 (Class 17)
5,000     | 250    | 33    | 4%  | 0.960           | 0.000           | $42,000 (Class 42)
10,000    | 500    | 63    | 3%  | 0.978           | 0.000           | $83,000 (Class 83)
100,000   | 3,000  | 334   | 2%  | 0.982           | 0.000           | $675,000 (Class 675)
1,000,000 | 10,000 | 1,000 | 1%  | 0.990           | 0.000           | $5,250,000 (Class 5250)
Odd-Number Stress Test — Exact Theoretical Values
N     | n  | c | α=β | P(pass|AQL) | P(pass|RQL) | Both ✓
3     | 3  | 0 | 10% | 1.000       | 0.000       | ✓
7     | 4  | 0 | 10% | 1.000       | 0.029       | ✓
11    | 7  | 1 | 10% | 1.000       | 0.088       | ✓
17    | 8  | 1 | 10% | 1.000       | 0.088       | ✓
22    | 13 | 2 | 10% | 1.000       | 0.067       | ✓
33    | 14 | 2 | 10% | 0.927       | 0.087       | ✓
47    | 14 | 2 | 10% | 0.927       | 0.086       | ✓
57    | 14 | 2 | 10% | 0.911       | 0.099       | ✓
83    | 19 | 3 | 10% | 0.936       | 0.098       | ✓
97    | 19 | 3 | 10% | 0.925       | 0.100       | ✓
113   | 25 | 4 | 8%  | 0.921       | 0.064       | ✓
147   | 25 | 4 | 8%  | 0.921       | 0.072       | ✓
166   | 25 | 4 | 8%  | 0.930       | 0.072       | ✓
233   | 35 | 6 | 6%  | 0.941       | 0.056       | ✓
347   | 35 | 6 | 6%  | 0.941       | 0.056       | ✓
499   | 35 | 6 | 6%  | 0.950       | 0.057       | ✓
501   | 40 | 7 | 5%  | 0.965       | 0.046       | ✓
999   | 50 | 9 | 5%  | 0.976       | 0.033       | ✓
1,001 | 51 | 9 | 4%  | 0.975       | 0.031       | ✓

All 19 odd-number test cases pass both AQL and RQL constraints at the exact theoretical level. Monotonicity confirmed: n is non-decreasing across all tested N values.

§20

Governance Architecture

The fundamental principle: separation of data collection from certification decisions. No individual or unit may exercise authority over both functions simultaneously.

Body | Reports To | Authority Over | Prohibited From
Assessment Operations | Chief Assessment Officer | Outreach, verification, data collection | Issuing, modifying, or revoking certificates
Certification Committee | Board of Directors | Procedural review, certificate issuance/revocation | Overriding mathematical outcomes
Impartiality Board | Independent charter | Annual audits, appeals review, public reporting | Operational activities of any kind

Certification Committee members serve fixed 2-year terms, staggered. They may not hold equity or debt in any currently assessed or recently certified business. Violations trigger mandatory recusal and Board review.

§21

Appeals Procedure

Grounds (exhaustive list):
G1: Procedural deviation — IBT failed to follow published procedure in a material way
G2: Identity verification error — documented error changed a response's counted status
G3: Undisclosed conflict of interest — involved IBT staff/Committee member

Not appealable: Statistical outcome (the mathematical pass/fail result)

Timeline: Filing deadline = Day 30 post-decision
Reviewer: Impartiality Board (independent of Assessment Operations and Certification Committee)
Decision deadline: Day 75 post-original decision
Publication: Anonymized summary published in quarterly appeals register

A successful G1 or G2 appeal results in a procedural re-run — not an automatic pass. The new assessment follows the standard protocol from the point of the identified error. A successful G3 finding results in the Committee member's recusal and re-review by a reconstituted panel.

§22

Reference Python Implementation

The canonical sampling plan algorithm. This is the reference implementation; IBT's production system must produce identical (n, c) outputs for all valid inputs.

from math import comb, floor, ceil

# Hypergeometric CDF: P(X ≤ k | N, K, n)
def hyper_cdf(k, N, K, n):
    total = sum(comb(K,i)*comb(N-K,n-i) for i in range(min(k,n,K)+1)
                 if i<=K and (n-i)<=(N-K))
    return total / comb(N, n)

# α/β schedule
def get_ab(N):
    for threshold, ab in [(100,.10),(200,.08),(500,.06),(1000,.05),
                             (5000,.04),(10000,.03),(100000,.02)]:
        if N <= threshold: return ab, ab
    return 0.01, 0.01

# Proportional floor
def prop_floor(N):
    if N<=500: return 0
    elif N<=10000: return ceil(0.05*N)
    elif N<=100000: return ceil(0.03*N)
    return ceil(0.01*N)

# Sampling plan search
def find_plan(N, prior_n=0):
    a, b = get_ab(N)
    D_good, D_bad = floor(0.10*N), ceil(0.30*N)
    for n in range(2, N+1):
        for c in range(0, n+1):
            if (hyper_cdf(c,N,D_good,n) >= 1-a and
                hyper_cdf(c,N,D_bad, n) <= b):
                n_final = max(n, prop_floor(N), prior_n)
                if n_final == n: return n_final, c
                # Recompute c for elevated n
                for c2 in range(0, n_final+1):
                    if (hyper_cdf(c2,N,D_good,n_final) >= 1-a and
                        hyper_cdf(c2,N,D_bad, n_final) <= b):
                        return n_final, c2
    raise ValueError(f"No plan found for N={N}")
Download Full Specification

The complete IBT Master Specification (IBT-SPEC-v2.0) is available as a formatted Word document for offline review, audit work, and institutional record-keeping.

Document Control

IBT-SPEC-v2.0 · ANSI/ASQ Z1.4 · Cochran (1977) · Monte Carlo & OC Curve validated · Published March 2026 · Next scheduled review: March 2027 · Questions: auditors@ibt.org