Evidence & Research

Impact Assessment Study for Textbooks, EdTech & Teaching-Learning

Rigorous research designs to measure what's working, for whom, and at what cost. We combine experimental methods, classroom observations, usage analytics, and cost-effectiveness analysis to inform your scale, stop, or tweak decisions.

Impact Assessment Overview

The Impact Assessment Study is a research service by RAYSolute Consultants that measures the causal effect of education interventions on learning outcomes and teaching practices. For publishers, EdTech companies, school systems, and funders, we design and execute rigorous studies using randomized controlled trials (RCTs), quasi-experimental methods (difference-in-differences, propensity score matching), classroom observations, and usage analytics. Our assessments quantify effect sizes, implementation fidelity, subgroup impacts, and cost-effectiveness (cost per 0.1 standard deviation, or SD, of learning gain). RAYSolute is led by Aurobindo Saxena, a Forbes India contributor with 23+ years of education sector experience.

Who This Is For

Organizations We Serve

Publishing Houses

Measure subject/series effectiveness, adoption quality, and teacher usage patterns.

EdTech Companies

Evaluate apps, tablets, and LMS impact on learning outcomes and pedagogy.

School Systems & Groups

Generate evidence for scale-up and procurement decisions.

Foundations / CSR / Investors

Verify outcomes and assess value for money of education investments.

Research Questions

Decisions This Study Answers

Learning Impact

How much did outcomes improve? Effect size (Cohen's d), gain scores, pass rate changes.

Pedagogical Impact

What changed in teaching? Time-on-task, questioning techniques, formative assessment, differentiation.

Implementation Fidelity

Was the program used as intended? Dosage, quality of delivery, reach across schools/teachers.

For Whom It Works

Subgroup analysis by grade, subject, gender, socio-economic classification (SEC), urban/rural, and struggling vs. advanced cohorts.

Cost-Effectiveness

Cost per 0.1 SD gain, cost per additional student reaching proficiency. Value-for-money benchmarks.

Scale / Stop / Tweak

Evidence-based recommendations: what to scale, what to fix, what to sunset.

Study Designs We Use (Fit to Context)

Cluster RCT

Gold standard where feasible & ethical

Difference-in-Differences

Pre-post with comparison groups

Matched Comparison

Propensity score / exact matching

Mixed Methods

Quant + qual for a complete picture

We recommend the minimum viable rigor that answers your question within your budget and timelines.
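
To make the difference-in-differences design concrete, below is a minimal Python sketch of the impact estimate with school-clustered standard errors. The file name and column names (score, treated, post, school) are illustrative placeholders, not a fixed schema.

  import pandas as pd
  import statsmodels.formula.api as smf

  # Long-format data: one row per student per round (placeholder schema).
  df = pd.read_csv("scores.csv")  # columns: score, treated, post, school

  # The treated:post interaction is the difference-in-differences
  # impact estimate; standard errors are clustered at the school level.
  did = smf.ols("score ~ treated * post", data=df).fit(
      cov_type="cluster", cov_kwds={"groups": df["school"]}
  )
  print(did.params["treated:post"], did.conf_int().loc["treated:post"])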

Measurement Framework

What We Measure (Education-Specific KPIs)

A) Student Learning Outcomes

  • Standardized tests / curriculum-aligned assessments (baseline → endline; optional midline)
  • Effect size (Cohen's d), gain scores, proficiency bands
  • Subgroup impacts by grade, gender, SEC, usage levels
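
For readers who want the mechanics, the headline effect size is the standardized mean difference between treatment and control endline scores. A minimal Python sketch of the standard pooled-SD formula (inputs are placeholder score arrays):

  import numpy as np

  def cohens_d(treatment, control):
      """Cohen's d: mean difference divided by the pooled standard deviation."""
      t = np.asarray(treatment, dtype=float)
      c = np.asarray(control, dtype=float)
      pooled_var = ((len(t) - 1) * t.var(ddof=1)
                    + (len(c) - 1) * c.var(ddof=1)) / (len(t) + len(c) - 2)
      return (t.mean() - c.mean()) / np.sqrt(pooled_var)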

B) Teaching-Learning Process

  • Classroom observations with structured rubrics: time-on-task, questioning, formative assessment
  • Teacher Knowledge, Attitudes & Practices (TKAP) surveys
  • PD/training participation and quality logs

C) Implementation & Usage

  • EdTech telemetry: active days, session length, completion, feature use
  • Textbook usage: lesson coverage, activity completion, homework adherence
  • School/class/teacher reach, dosage, and adherence rates
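
As a sketch of how raw telemetry rolls up into the usage bands used in subgroup analysis (file name, columns, and band thresholds are all illustrative, set per study):

  import pandas as pd

  logs = pd.read_csv("sessions.csv")  # columns: student_id, date, minutes

  usage = logs.groupby("student_id").agg(
      active_days=("date", "nunique"),
      total_minutes=("minutes", "sum"),
  )
  # Band total exposure for heterogeneity analysis; these cut-offs
  # are placeholders, agreed with the client during scoping.
  usage["band"] = pd.cut(
      usage["total_minutes"],
      bins=[0, 120, 480, float("inf")],
      labels=["low", "medium", "high"],
      include_lowest=True,
  )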

D) Cost & Feasibility

  • Cost per school / teacher / student reached
  • Training & support effort quantification
  • Cost-effectiveness benchmarks (₹ per 0.1 SD gain / per proficiency lift)
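
Once the impact estimate is in hand, the cost-effectiveness benchmark is simple arithmetic. A worked Python sketch with purely illustrative numbers:

  def cost_per_0_1_sd(total_cost_inr, students_reached, effect_size_sd):
      """₹ per student per 0.1 SD of learning gain."""
      per_student = total_cost_inr / students_reached
      return per_student / (effect_size_sd / 0.1)

  # Illustrative only: a ₹20 lakh program reaching 5,000 students
  # with a 0.15 SD impact costs about ₹267 per student per 0.1 SD.
  print(round(cost_per_0_1_sd(2_000_000, 5_000, 0.15)))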

Process

Scope of Work (What We Do)

1
Scoping & Logic Model

Problem framing, outcomes map, theory of change, risks/assumptions, and sampling plan development.

2
Instruments & Protocols

Curriculum-aligned tests, observation rubrics, surveys/interview guides, usage data specs, consent/assent templates.

3
Sampling & Power Analysis

Sample size calculations, cluster design (class/school), randomization or matching plan for valid comparison.
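
For example, clustering inflates the individual-level sample size by the design effect DEFF = 1 + (m − 1) × ICC, where m is the cluster size and ICC the intra-class correlation. A Python sketch with assumed inputs (replace with values from your context):

  from math import ceil
  from statsmodels.stats.power import TTestIndPower

  mde = 0.25  # minimum detectable effect in SD units (assumed)
  icc = 0.10  # intra-class correlation across schools (assumed)
  m = 40      # students tested per school (assumed)

  # Individual-level n per arm, inflated by the design effect for
  # clustering, then converted to schools per arm.
  n_individual = TTestIndPower().solve_power(
      effect_size=mde, alpha=0.05, power=0.80
  )
  deff = 1 + (m - 1) * icc
  print(ceil(n_individual * deff / m))  # about 31 schools per arm here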

4
Fieldwork & Quality Control

Enumerator training, instrument piloting, real-time monitoring, data validation, and bias safeguards.

5
Analysis & Triangulation

Pre-registered analysis plan; DiD/ANCOVA; multilevel (class/school) models; subgroup & sensitivity checks.
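
As one example of the models named above, a baseline-adjusted ANCOVA with a school-level random intercept; a minimal Python sketch in which the data file and column names (endline, baseline, treated, school) are placeholders:

  import pandas as pd
  import statsmodels.formula.api as smf

  df = pd.read_csv("endline.csv")  # one row per student (placeholder schema)

  # Treatment effect adjusted for baseline scores, with a random
  # intercept per school to respect the multilevel design.
  ancova = smf.mixedlm(
      "endline ~ treated + baseline", data=df, groups=df["school"]
  ).fit()
  print(ancova.params["treated"])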

6
Reporting & Enablement

Technical report, board-facing executive summary, district/school feedback notes, and monitoring dashboard.

7
Scale-up & Product Feedback

Recommendations to improve content/app flows, teacher PD focus, and implementation playbooks.

Methodology (How We Keep It Rigorous & Practical)

Pre-registration of outcomes and analysis plan where appropriate

Missing data handling with transparent rules (MCAR/MAR checks)

Clustering & design effects accounted for in standard errors and CIs

Heterogeneity analyses by grade, gender, SEC, usage bands

Validity & reliability: item analysis, inter-rater checks for observations

Ethics & consent: child-safe protocols, data minimization, privacy-by-design

Attribution clarity: limit co-interventions, document context shocks

Transparent reporting with effect sizes, confidence intervals, limitations

Indicative Timeline

Weeks 0–2
Setup

Scoping, instruments, piloting, enumerator training

Weeks 3–6
Baseline

Baseline data collection + program launch

Weeks 7–12
Monitoring

Midline (optional), implementation support

Weeks 13–16
Endline & Report

Endline, analysis, reporting & read-outs

(Timelines flex with the school term/calendar and field access.)

What You Get

Deliverables

  • Study Protocol (design, instruments, sampling, analysis plan)
  • Data Collection Pack (tests, rubrics, surveys, consent forms)
  • Clean Datasets + Codebooks (analysis code on request)
  • Impact Report (technical + executive) with effect sizes & CIs
  • Implementation & Cost-effectiveness Brief
  • Dashboard (spreadsheet or web) for ongoing monitoring
  • Scale-Up Recommendations (product/content/PD/ops)

Textbook/Series Impact (Publishers)

Baseline/endline tests in target subjects; fidelity tracking via teacher logs & observations; subgroup analysis by grade/SEC; cost per proficiency gain. Evidence for adoption decisions and content improvement.

EdTech Impact (Apps/Tablets/LMS)

Usage telemetry linked to pre-post tests; teaching practice changes via rubrics; time-on-task and homework adherence; cohort outcomes by usage bands. Evidence for product-market fit and efficacy claims.
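
A sketch of the usage-band linkage in Python (placeholder file and column names; this descriptive cut complements, but does not replace, the experimental comparison):

  import pandas as pd

  tests = pd.read_csv("tests.csv")  # student_id, baseline, endline
  usage = pd.read_csv("usage.csv")  # student_id, band (low/medium/high)

  tests["gain"] = tests["endline"] - tests["baseline"]
  merged = tests.merge(usage, on="student_id")
  # Mean gain by usage band: a dose-response view of the cohort.
  print(merged.groupby("band")["gain"].agg(["mean", "count"]))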

FAQs

Frequently Asked Questions

How large a sample do we need?

We run a power analysis upfront based on your expected effect size and outcome variance. Typical cluster RCTs require 20–40 schools (or classes) per arm. Smaller samples may work for larger expected effects or individual-level designs.

Can we start a study mid-year?

Yes—if we can establish a clean baseline and identify a reasonable comparison group (matched schools/classes or stepped-wedge rollout). Mid-year starts require careful design to ensure valid comparison.

How much burden does this place on teachers and schools?

Minimal—we reuse existing logs where possible and keep instruments short. Classroom observations are scheduled in advance and designed to be non-disruptive to teaching.

Can we use the results in marketing?

Yes—with guardrails. We produce a rigorous technical report and board-facing executive summary. Public claims should accurately reflect effect sizes, confidence intervals, and study context. We can help translate findings for marketing use.

Do you handle consent, ethics approvals, and data privacy?

Yes. We provide consent/assent templates, anonymize data, follow agreed retention policies, and apply privacy-by-design principles. IRB/IEC approvals are coordinated where required. Child-safe protocols are non-negotiable.

Need Credible Evidence on Your Education Intervention?

Let's scope your Impact Assessment Study and design research that answers your key questions.

Start the Conversation

+91-9891321279  |  aurobindo@raysolute.com