# Tradition as Algorithm 🗃️📜💻
> “If it worked once, it must be right forever.”
>
> “The most powerful algorithm is the one nobody remembers writing.”
In many systems, what looks like rational design is actually historical habit—solidified, digitized, and disguised as optimization.
The logic is simple:

> “This is how we’ve always done it.”

becomes

> “This is the correct way to do it.”

This is how tradition becomes code.
## 🧠 Examples Across Domains
- Medicine: Risk calculators still based on decades-old cohorts, despite shifting demographics
- Education: Standardized testing norms built around mid-century cognitive models
- Law: Sentencing algorithms trained on historically biased conviction data
- Hiring: Resume screeners that reward keywords rooted in elite academic pedigree
These are not just outdated. They are encoded tradition—congealed in math, not myth.
## 🔄 Tradition Reinforced by Interface
Interfaces often hide the source of their logic. Consider:
- Drop-down menus with pre-defined “acceptable” values
- Default thresholds marked in green
- Risk scores labeled “low” without user-defined baselines
- Output visualizations with no reference cohort or confidence interval
What looks clean is often curated inheritance.
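The pattern above can be made concrete with a minimal sketch. Everything here is hypothetical — the function name, the cutoff, and the labels are invented for illustration — but it shows how a tidy interface can present a hard-coded threshold as if it were self-evident:

```python
# Hypothetical sketch: a "clean" risk label with a baked-in,
# unexplained cutoff -- the curated inheritance described above.

RISK_CUTOFF = 0.15  # Where did 0.15 come from? The code does not say.

def label_risk(score: float) -> str:
    """Label a risk score without exposing the reference cohort
    or the provenance of the cutoff."""
    return "low" if score < RISK_CUTOFF else "elevated"

print(label_risk(0.10))  # prints "low" -- relative to whom? Measured when?
```

The user sees a reassuring word, not the decades-old assumption behind it.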
## 🧪 The Illusion of Axioms
Traditional systems masquerade as axioms:
- An eGFR below 30 mL/min/1.73 m² is always bad
- A 4.5% perioperative mortality rate is unacceptable
- Overweight is a BMI over 25, full stop
But context matters. Subgroup differences matter.
The problem isn’t the rule—it’s the failure to question it.
## 🌍 Cultural Inheritance in Code
Tradition is also cultural.
- What counts as “success”?
- What defines “risk worth acting on”?
- Whose outcomes are prioritized in model optimization?
Many risk calculators are trained on Western, insured, white populations—and then exported as universal.
The algorithm is neutral only if the culture that birthed it is.
## 🛠 Ukubona’s Refusal
Ukubona refuses inherited settings without interrogation:
- All thresholds are transparent and editable
- Subgroup sample sizes are shown beside estimates
- Interface labels favor interpretive clarity, not defaults that encode bias
- The user sees assumptions, not just output
Our models declare their ancestry.
They wear their assumptions visibly.
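One way to make a model “wear its assumptions visibly” is to let every threshold carry its own provenance. This is a hypothetical sketch of that principle, not Ukubona’s actual implementation; the class, fields, and example cohort are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical sketch: a cutoff that declares its ancestry, so the
# source cohort and sample size travel with the number itself.

@dataclass(frozen=True)
class Threshold:
    value: float
    unit: str
    source_cohort: str  # who the cutoff was derived from
    n: int              # subgroup sample size, shown beside the estimate

    def describe(self) -> str:
        return (f"{self.value} {self.unit} "
                f"(derived from {self.source_cohort}, n={self.n})")

bmi_cutoff = Threshold(25.0, "kg/m^2", "mid-century Western cohorts", 4000)
print(bmi_cutoff.describe())
# prints: 25.0 kg/m^2 (derived from mid-century Western cohorts, n=4000)
```

A threshold that names its cohort invites the question the bare number suppresses: does this ancestry apply to the person in front of you?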
## 📘 Why This Matters
Because in a world where risk models drive:
- Access to care
- Access to funding
- Access to dignity
…we cannot afford to pretend the algorithm is new just because it’s digital.
Next: Normativity by Default – When the Average Becomes the Ideal