Impact Assessment Study

Textbooks, EdTech & Teaching–Learning

Abstract

We design and run impact assessments that tell you what’s working, for whom, and at what cost. For publishers and EdTech providers, we measure the effect of textbooks, apps, and devices on student learning outcomes and the teaching–learning process, combining rigorous designs (pre–post, quasi-experimental, RCT where feasible) with classroom observations, usage analytics, and cost-effectiveness analysis.

Who this is for

  • Publishing houses: subject/series effectiveness, adoption quality, teacher use
  • EdTech companies: apps/tablets/LMS impact on learning and pedagogy
  • School systems & groups: evidence for scale-up and procurement decisions
  • Foundations/CSR & investors: outcomes verification and value for money

Decisions this study answers

  • Learning impact: How much did outcomes improve (effect size, pass rates)?
  • Pedagogical impact: What changed in teaching practice (time-on-task, questioning, feedback)?
  • Implementation fidelity: Was the program used as intended (dosage, quality, reach)?
  • For whom it works: Grade/subject, gender, socio-economic category (SEC), urban/rural, newcomer/struggling cohorts
  • Cost-effectiveness: Cost per 0.1 SD gain / per additional student reaching proficiency
  • Scale/stop/tweak: What to scale, what to fix, what to sunset

Study designs we use (fit to context)

  • Pre–post with comparison (difference-in-differences; sketched in code below)
  • Matched comparison (propensity score / exact matching)
  • Cluster RCT (where feasible and ethical)
  • Mixed-methods (quant + qual) for a complete picture

We’ll recommend the minimum viable rigor that answers your question within your budget and timelines.
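To make the first design concrete, here is a minimal difference-in-differences sketch in Python. The dataset, file name, and column names (score, treated, post, school_id) are hypothetical placeholders, not from a real engagement:

```python
# Minimal difference-in-differences sketch (illustrative only).
# Assumes hypothetical long-format data: one test score per student
# per round, with columns score, treated (0/1), post (0/1), school_id.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")  # hypothetical file name

# The DiD estimate is the coefficient on the treated:post interaction;
# standard errors are clustered at the school level (the unit of assignment).
model = smf.ols("score ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print("DiD estimate:", model.params["treated:post"],
      "SE:", model.bse["treated:post"])
```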

What we measure (education-specific KPIs)

A) Student Learning Outcomes
  • Standardised tests / curriculum-aligned assessments (baseline → endline; optional midline)
  • Effect size (Cohen’s d), gain scores, proficiency bands, subgroup impacts (worked example after this list)
B) Teaching–Learning Process
  • Classroom observations (structured rubrics): time-on-task, questioning, formative assessment, differentiation
  • Teacher knowledge/attitudes/practices (TKAP) surveys; PD/training logs
C) Implementation & Usage
  • EdTech telemetry: active days, session length, completion, feature use
  • Textbook usage: lesson coverage, activity completion, homework adherence
  • School/class/teacher reach, dosage, and adherence
D) Cost & Feasibility
  • Cost per school/teacher/student; training & support effort
  • Cost-effectiveness benchmarks (₹ per 0.1 SD gain / per proficiency lift)
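As a worked illustration of the effect-size and cost benchmarks above (every number below is hypothetical):

```python
# Worked sketch: Cohen's d from endline summary statistics, then the
# cost-per-0.1-SD benchmark. All numbers are hypothetical.
import math

mean_t, sd_t, n_t = 54.0, 12.0, 400   # treatment group endline
mean_c, sd_c, n_c = 51.0, 12.5, 400   # comparison group endline

# Pooled standard deviation, then Cohen's d
sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                      / (n_t + n_c - 2))
d = (mean_t - mean_c) / sd_pooled      # ~0.24 SD with these inputs

# ₹ per 0.1 SD gain per student
cost_per_student = 500.0               # hypothetical program cost (₹)
print(round(d, 2), "SD; ₹", round(cost_per_student / (d / 0.1)), "per 0.1 SD")
```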

Scope of work (what we do)

  1. Scoping & Logic Model — Problem framing, outcomes map, risks/assumptions, sampling plan.
  2. Instruments & Protocols — Tests (curriculum-aligned), observation rubrics, surveys/interview guides, usage data specs; consent/assent templates.
  3. Sampling & Power — Sample size calculations, cluster design (class/school), randomisation/matching plan (power sketch after this list).
  4. Fieldwork & Quality Control — Enumerator training, piloting, monitoring, data validation, bias safeguards.
  5. Analysis & Triangulation — Pre-registered analysis plan; DiD/ANCOVA; multilevel (class/school) models; subgroup & sensitivity checks.
  6. Reporting & Enablement — Technical report, board-facing exec summary, district/school feedback notes, and a dashboard for ongoing tracking.
  7. Scale-up & Product Feedback — Recommendations to improve content/app flows, teacher PD focus, and implementation playbooks.
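For item 3, a minimal sketch of a cluster-aware power calculation: take the per-arm sample for individually randomised students and inflate it by the design effect. The MDES, ICC, and cluster size are hypothetical placeholders:

```python
# Cluster-aware sample-size sketch. DEFF = 1 + (m - 1) * ICC inflates
# the individually randomised per-arm n; inputs are hypothetical.
import math
from statsmodels.stats.power import TTestIndPower

mdes = 0.25   # minimum detectable effect size (SD)
icc = 0.10    # intraclass correlation at the school level
m = 30        # students tested per school

n_individual = TTestIndPower().solve_power(
    effect_size=mdes, alpha=0.05, power=0.80
)  # per-arm n under individual randomisation (~252 here)
deff = 1 + (m - 1) * icc
schools_per_arm = math.ceil(n_individual * deff / m)
print(f"{schools_per_arm} schools per arm")
```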

Methodology (how we keep it rigorous & practical)

  • Pre-registration of outcomes where appropriate
  • Handling missing data with transparent rules (MCAR/MAR checks)
  • Clustering & design effects accounted for in SEs/CIs
  • Heterogeneity analyses (grade, gender, SEC, usage bands)
  • Validity & reliability: item analysis, inter-rater checks for observations (kappa sketch after this list)
  • Ethics & consent: child-safe protocols; data minimisation; privacy-by-design
  • Attribution clarity: limit co-interventions; document context shocks
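For the inter-rater checks above, a small sketch using weighted Cohen's kappa between two observers scoring the same lessons; the ratings and the 4-point rubric item are hypothetical:

```python
# Inter-rater reliability sketch: weighted Cohen's kappa between two
# observers scoring the same ten lessons on a hypothetical 4-point rubric.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 2, 4, 4, 1, 3, 2, 4, 3, 2]
rater_b = [3, 2, 3, 4, 1, 3, 2, 4, 2, 2]

# Quadratic weights credit near-misses on an ordinal scale
print(round(cohen_kappa_score(rater_a, rater_b, weights="quadratic"), 2))
```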

Deliverables

  • Study Protocol (design, instruments, sampling, analysis plan)
  • Data Collection Pack (tests, rubrics, surveys, consent)
  • Clean Datasets + Codebooks; analysis code/notebooks (on request)
  • Impact Report (technical + executive) with effect sizes & CIs
  • Implementation & Cost-effectiveness Brief
  • Dashboard (spreadsheet or web) for ongoing monitoring
  • Scale-Up Recommendations (product/content/PD/ops)

Indicative timeline

  • Weeks 0–2: Scoping, instruments, piloting, enumerator training
  • Weeks 3–6: Baseline + program launch
  • Weeks 7–12: Monitoring, midline (optional), support diagnostics
  • Weeks 13–16: Endline, analysis, reporting & read-outs

(Timelines flex with the academic term/calendar and school access.)

Example study variants (drawn from our past engagements)

  • Textbook/Series Impact (Publisher): Baseline/endline tests in target subjects; fidelity via teacher logs & observations; subgroup analysis by grade/SEC; cost per proficiency gain.
  • EdTech Impact (Apps/Tablets): Usage telemetry + pre–post tests; teaching-practice changes via rubrics; time-on-task and homework adherence; cohort outcomes by usage bands (sketched below).
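For the usage-band analysis in the EdTech variant, a brief sketch assuming a hypothetical per-student telemetry export with active_days and score_gain columns:

```python
# Sketch: cohort gains by usage band from app telemetry. The table,
# column names, and band cut-points are hypothetical.
import pandas as pd

df = pd.read_csv("telemetry.csv")  # hypothetical export

df["usage_band"] = pd.cut(
    df["active_days"],
    bins=[0, 10, 30, 60, 120],
    labels=["light", "moderate", "regular", "intensive"],
)
# Descriptive only: heavy users may differ in motivation, so banded
# comparisons complement, rather than replace, the causal design.
print(df.groupby("usage_band", observed=True)["score_gain"]
        .agg(["mean", "count"]))
```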

FAQs

How large does the sample need to be?
We run a power analysis up front; typical clusters are 20–40 schools (or classes) depending on desired detectable effect and variance.

Can you evaluate a program that is already rolling out?
Yes—if we can set a clean baseline and find a reasonable comparison (matched schools/classes or stepped-wedge rollout).

How much burden does the study place on schools and teachers?
Minimal—we reuse existing logs where possible and keep instruments short; observations are scheduled and non-disruptive.

Can we use the findings in marketing and sales?
Yes—with guardrails. We produce a rigorous report and board-facing summary; public claims should reflect effect sizes and context.

How do you handle consent, ethics, and data privacy?
We provide consent/assent templates, anonymise data, and follow agreed retention policies. Formal IRB/IEC approvals are coordinated where required.

Need credible evidence on the impact of your textbooks or EdTech?

Let’s scope your Impact Assessment Study.

Write to aurobindo@raysolute.com or call +91-9891321279.