Impact Assessment Study for Textbooks, EdTech & Teaching-Learning
Rigorous research designs to measure what's working, for whom, and at what cost. We combine experimental methods, classroom observations, usage analytics, and cost-effectiveness analysis to inform your scale, stop, or tweak decisions.
Impact Assessment Overview
Impact Assessment Study is a research service by RAYSolute Consultants that measures the causal effect of education interventions on learning outcomes and teaching practices. For publishers, EdTech companies, school systems, and funders, we design and execute rigorous studies using RCTs, quasi-experimental methods (difference-in-differences, propensity score matching), classroom observations, and usage analytics. Our assessments determine effect sizes, implementation fidelity, subgroup impacts, and cost-effectiveness (cost per 0.1 SD gain). RAYSolute is led by Aurobindo Saxena, a Forbes India contributor with 23+ years of education sector experience.
Organizations We Serve
Publishing Houses
Measure subject/series effectiveness, adoption quality, and teacher usage patterns.
EdTech Companies
Evaluate the impact of apps, tablet programs, and LMS platforms on learning outcomes and pedagogy.
School Systems & Groups
Generate evidence for scale-up and procurement decisions.
Foundations / CSR / Investors
Verify outcomes and assess value for money of education investments.
Decisions This Study Answers
Learning Impact
How much did outcomes improve? Effect size (Cohen's d), gain scores, pass rate changes (worked example after this list).
Pedagogical Impact
What changed in teaching? Time-on-task, questioning techniques, formative assessment, differentiation.
Implementation Fidelity
Was the program used as intended? Dosage, quality of delivery, reach across schools/teachers.
For Whom It Works
Subgroup analysis by grade, subject, gender, SEC, urban/rural, struggling vs. advanced cohorts.
Cost-Effectiveness
Cost per 0.1 SD gain, cost per additional student reaching proficiency. Value-for-money benchmarks.
Scale / Stop / Tweak
Evidence-based recommendations: what to scale, what to fix, what to sunset.
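To make the learning-impact metric concrete, here is a minimal sketch of how Cohen's d can be computed from endline scores; the scores and group sizes below are hypothetical, not study data.

```python
import numpy as np

# Hypothetical endline scores (percent correct) for treatment and comparison students.
treatment = np.array([62, 58, 71, 66, 60, 69, 64, 73, 59, 67], dtype=float)
comparison = np.array([55, 52, 60, 58, 49, 57, 61, 54, 56, 50], dtype=float)

# Cohen's d: mean difference divided by the pooled standard deviation.
n_t, n_c = len(treatment), len(comparison)
pooled_var = ((n_t - 1) * treatment.var(ddof=1) +
              (n_c - 1) * comparison.var(ddof=1)) / (n_t + n_c - 2)
cohens_d = (treatment.mean() - comparison.mean()) / np.sqrt(pooled_var)
print(f"Cohen's d = {cohens_d:.2f}")
```

In a real study the same calculation is run on baseline-adjusted scores, with clustering accounted for (see Methodology below).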
Study Designs We Use (Fit to Context)
Cluster RCT
Gold standard where feasible & ethical
Difference-in-Differences
Pre-post with comparison groups (sketched below)
Matched Comparison
Propensity score / exact matching
Mixed Methods
Quant + qual for complete picture
We recommend the minimum viable rigor that answers your question within your budget and timelines.
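As an illustration of the difference-in-differences design listed above, a minimal sketch of the core calculation; the group means are hypothetical.

```python
# Difference-in-differences: the treated group's change minus the comparison
# group's change, using hypothetical baseline/endline mean scores.
treated_baseline, treated_endline = 48.0, 61.0
comparison_baseline, comparison_endline = 50.0, 56.0

did_estimate = (treated_endline - treated_baseline) - (comparison_endline - comparison_baseline)
print(f"DiD estimate = {did_estimate:.1f} score points")  # (61-48) - (56-50) = 7.0
```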
What We Measure (Education-Specific KPIs)
A) Student Learning Outcomes
- Standardized tests / curriculum-aligned assessments (baseline → endline; optional midline)
- Effect size (Cohen's d), gain scores, proficiency bands
- Subgroup impacts by grade, gender, SEC, usage levels
B) Teaching-Learning Process
- Classroom observations with structured rubrics: time-on-task, questioning, formative assessment
- Teacher Knowledge, Attitudes & Practices (TKAP) surveys
- PD/training participation and quality logs
C) Implementation & Usage
- EdTech telemetry: active days, session length, completion, feature use
- Textbook usage: lesson coverage, activity completion, homework adherence
- School/class/teacher reach, dosage, and adherence rates
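To show how EdTech telemetry can be rolled up into dosage and usage bands, a minimal pandas sketch; the column names and band thresholds are illustrative assumptions, not a fixed specification.

```python
import pandas as pd

# Hypothetical session log: one row per student session in the app.
sessions = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 3],
    "date": pd.to_datetime(["2024-07-01", "2024-07-02", "2024-07-08",
                            "2024-07-01", "2024-07-15", "2024-07-03"]),
    "minutes": [25, 40, 15, 30, 20, 10],
})

# Dosage per student: active days and total minutes over the study window.
dosage = sessions.groupby("student_id").agg(
    active_days=("date", "nunique"),
    total_minutes=("minutes", "sum"),
)

# Assign usage bands (thresholds are illustrative only).
dosage["usage_band"] = pd.cut(dosage["total_minutes"],
                              bins=[0, 30, 60, float("inf")],
                              labels=["low", "medium", "high"])
print(dosage)
```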
D) Cost & Feasibility
- Cost per school / teacher / student reached
- Training & support effort quantification
- Cost-effectiveness benchmarks (₹ per 0.1 SD gain / per proficiency lift)
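A minimal sketch of the arithmetic behind the "₹ per 0.1 SD gain" benchmark, using hypothetical programme figures:

```python
# Hypothetical programme figures.
total_cost_inr = 2_400_000      # content, devices, training, support
students_reached = 3_000
effect_size_sd = 0.18           # estimated impact in standard deviations

cost_per_student = total_cost_inr / students_reached
cost_per_0_1_sd = cost_per_student / (effect_size_sd / 0.1)

print(f"Cost per student: ₹{cost_per_student:,.0f}")
print(f"Cost per 0.1 SD gain per student: ₹{cost_per_0_1_sd:,.0f}")
```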
Scope of Work (What We Do)
Scoping & Logic Model
Problem framing, outcomes map, theory of change, risks/assumptions, and sampling plan development.
Instruments & Protocols
Curriculum-aligned tests, observation rubrics, surveys/interview guides, usage data specs, consent/assent templates.
Sampling & Power Analysis
Sample size calculations, cluster design (class/school), randomization or matching plan for valid comparison.
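As a sketch of why cluster designs need larger samples, the standard design-effect adjustment; the intracluster correlation, cluster size, and starting sample below are illustrative assumptions.

```python
import math

# Hypothetical inputs for a cluster design (classes as clusters).
n_individual = 800      # sample size needed if students were randomized individually
cluster_size = 30       # students assessed per class
icc = 0.15              # intracluster correlation (similarity of students within a class)

# Design effect: how much the clustered sample must grow to keep the same power.
design_effect = 1 + (cluster_size - 1) * icc
n_clustered = math.ceil(n_individual * design_effect)
n_classes = math.ceil(n_clustered / cluster_size)

print(f"Design effect: {design_effect:.2f}")
print(f"Students needed: {n_clustered} (about {n_classes} classes)")
```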
Fieldwork & Quality Control
Enumerator training, instrument piloting, real-time monitoring, data validation, and bias safeguards.
Analysis & Triangulation
Pre-registered analysis plan; DiD/ANCOVA; multilevel (class/school) models; subgroup & sensitivity checks.
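A minimal sketch of the multilevel ANCOVA step using statsmodels mixed-effects models; the file name and column names are assumptions about how study data might be organized, not a fixed schema.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data with baseline and endline scores,
# a treatment indicator, and the school each student belongs to.
df = pd.read_csv("student_scores.csv")  # columns: endline, baseline, treated, school_id

# ANCOVA-style model: endline regressed on treatment, adjusting for baseline,
# with a random intercept for school to account for clustering.
model = smf.mixedlm("endline ~ treated + baseline", data=df, groups=df["school_id"])
result = model.fit()
print(result.summary())
```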
Reporting & Enablement
Technical report, board-facing executive summary, district/school feedback notes, and monitoring dashboard.
Scale-up & Product Feedback
Recommendations to improve content/app flows, teacher PD focus, and implementation playbooks.
Methodology (How We Keep It Rigorous & Practical)
Pre-registration of outcomes and analysis plan where appropriate
Missing data handling with transparent rules (MCAR/MAR checks)
Clustering & design effects accounted for in standard errors and CIs
Heterogeneity analyses by grade, gender, SEC, usage bands
Validity & reliability: item analysis, inter-rater checks for observations
Ethics & consent: child-safe protocols, data minimization, privacy-by-design
Attribution clarity: limit co-interventions, document context shocks
Transparent reporting with effect sizes, confidence intervals, limitations
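To illustrate the inter-rater checks for classroom observations, a minimal sketch using Cohen's kappa from scikit-learn; the rubric scores are made up.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric scores (1-4) from two observers rating the same 10 lessons.
observer_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
observer_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]

# Cohen's kappa: agreement beyond what chance alone would produce.
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Inter-rater agreement (Cohen's kappa) = {kappa:.2f}")
```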
Indicative Timeline
Setup
Scoping, instruments, piloting, enumerator training
Baseline
Baseline data collection + program launch
Monitoring
Midline (optional), implementation support
Endline & Report
Endline, analysis, reporting & read-outs
(Timelines flex with the academic calendar, school terms, and field access.)
Deliverables
Textbook/Series Impact (Publishers)
Baseline/endline tests in target subjects; fidelity tracking via teacher logs & observations; subgroup analysis by grade/SEC; cost per proficiency gain. Evidence for adoption decisions and content improvement.
EdTech Impact (Apps/Tablets/LMS)
Usage telemetry linked to pre-post tests; teaching practice changes via rubrics; time-on-task and homework adherence; cohort outcomes by usage bands. Evidence for product-market fit and efficacy claims.
Need Credible Evidence on Your Education Intervention?
Let's scope your Impact Assessment Study and design research that answers your key questions.
Start the Conversation