Embedding HCD in National Disability Services

Institutionalising design and evidence aligns policy and delivery at scale.

1. Context

Between 2017 and 2019, the NDIA established a human‑centred design Centre of Excellence to improve service quality for ~450,000 NDIS participants. The team unified feedback (social listening, complaints, interviews) into reform workflows and paired design with financial scrutiny, reportedly halting a flawed rollout and avoiding significant costs. The work surfaced systemic gaps common to federated social programmes: fragmented feedback and delivery pipelines, weak links between outcomes and funding decisions, misaligned policy signals across jurisdictions, trust deficits, and limited standards for service design and measurement.

2. Observations

Governance: An HCD Centre of Excellence created a focal point to align policy and operations, standardise practices, and gate major changes through cross‑functional review. Embedding design capability inside governance reduced policy–delivery drift and enabled earlier challenge to risky rollouts, but mandate and adoption varied across programmes and jurisdictions.

Measurement: Participant outcomes were not consistently tied to operational or financial decisions. Consolidating complaints, social listening, and research improved signal quality, yet the absence of a minimum “evidence pack” (definitions, measures, traceability from insight to change) limited comparability and slowed learning across units.
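To make the minimum "evidence pack" idea concrete, the sketch below models it as a small data structure: shared definitions, measured insights, and a traceability link from each insight to the change it informed. This is a hypothetical illustration, not an NDIA artifact; every class, field, and value here is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    """One insight, with its source, its measure, and the change it fed into."""
    insight: str        # what was observed
    source: str         # feedback channel, e.g. "complaints", "social_listening"
    measure: str        # the agreed metric the insight is expressed in
    linked_change: str  # the service or funding change it informed, if any

@dataclass
class EvidencePack:
    """A minimum evidence pack: shared definitions plus traceable insights."""
    programme: str
    definitions: dict = field(default_factory=dict)  # term -> agreed definition
    items: list = field(default_factory=list)

    def untraced(self):
        """Insights that never reached a decision: broken traceability."""
        return [i for i in self.items if not i.linked_change]

# Hypothetical example: one insight logged but not yet linked to a decision.
pack = EvidencePack(
    programme="Plan reviews",
    definitions={"participant outcome": "change in self-reported goal attainment"},
)
pack.items.append(EvidenceItem(
    insight="Participants report unclear plan review timelines",
    source="complaints",
    measure="share of complaints citing review delays",
    linked_change="",  # not yet linked to any decision
))
print(len(pack.untraced()))  # → 1
```

Even a schema this small makes the comparability point visible: units that share the definitions and the insight-to-change link can be audited and compared; units that do not cannot.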

Accountability: Financial assessment alongside design testing increased discipline, reportedly preventing a costly implementation. Integrating participant voice into decision forums strengthened accountability, but historic complaints and inconsistent experiences indicated legitimacy gaps that require transparent feedback‑to‑decision pipelines and visible redress.

Scalability: Point fixes without shared standards risk duplication. Playbooks, templates, and shared taxonomies can spread good practice, but cross‑programme variation and state–federal interfaces complicate diffusion. Without a common service‑design standard and outcome framework, capabilities remain siloed and benefits stall at pilot scale.
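One way to picture how a shared taxonomy constrains cross‑programme reporting is a simple validation check: programmes report against common outcome categories, and anything outside the taxonomy is flagged rather than silently accepted. The taxonomy contents below are invented for illustration only.

```python
# Hypothetical shared outcome taxonomy: domain -> sub-outcomes.
# Programmes in every jurisdiction report against these same keys.
OUTCOME_TAXONOMY = {
    "daily_living": ["self_care", "mobility"],
    "social_participation": ["community_access", "relationships"],
    "economic": ["employment", "education"],
}

def validate_report(report: dict) -> list:
    """Return outcome keys in a programme report that fall outside the taxonomy."""
    valid = {f"{d}.{s}" for d, subs in OUTCOME_TAXONOMY.items() for s in subs}
    return [k for k in report if k not in valid]

# A programme reporting one standard key and one local, non-standard key:
report = {"daily_living.self_care": 0.62, "wellbeing.morale": 0.40}
print(validate_report(report))  # → ['wellbeing.morale']
```

The flagged key is exactly the kind of local variation that keeps results from travelling across state–federal interfaces; a shared standard does not forbid it, but it does make the divergence explicit.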

3. Research Considerations

The case reveals a missing layer of coordination infrastructure: a common evidence standard linking outcomes to spend, governance that pairs design with financial scrutiny, and unified feedback pipelines that travel across jurisdictions. Institutionalising these components can reduce costly rollouts and build trust.

  • What minimum evidence standard should link participant outcomes to service and funding changes?
  • How can federated governance align policy signals while protecting local responsiveness?
  • Which shared artifacts (design standards, outcome taxonomies, reporting templates) most efficiently scale capability across programmes?
