Impact

Transparent Unit Economics. Data-Driven Outcomes. Built for Scale.

Mobiloitte Foundation delivers AI literacy and safe AI usage programs through a partner-led model designed for replication, cost-efficiency, and evidence-based reporting. This page explains how costs are structured, what outcomes we measure, and how partners receive funder-ready reporting.

Executive Summary

The Investment Case for Scalable AI Literacy

We operate an asset-light, partner-led model optimized for rapid replication and measurable developmental outcomes. This summary outlines the core pillars of our financial and impact logic.

Low Unit Cost

High Replication Speed

Cost Per Learner

Optimized through existing partner infrastructure and shared devices.

What Funders Get

Transparent, auditable impact data and end-of-cohort reporting packs.

Reporting Cadence

Weekly internal tracking with monthly and end-of-cohort funder dashboards.

Scale Model

Powered by a standardized Trainer Academy and implementation playbooks.

Transparency First

Why Unit Economics Matters

Funders and CSR teams increasingly expect transparency and accountability:

  • Clear cost per learner (and what it includes)
  • Transparent cost drivers and efficiency levers
  • Defined KPIs and reporting cadence
  • Responsible safeguards for children and vulnerable youth
  • A plan for replication and scale across multiple sites

We design for these expectations from day one.

Every program is built with funder-ready reporting and transparent cost structures.

Unit Economics: How We Price and Plan

The core unit: a Cohort

Typical Cohort Structure (customizable)

  • Learners per cohort: [20–30]
  • Sessions: [6–10]
  • Duration: [3–8 weeks]
  • Mode: In‑person or hybrid

Bracketed ranges are placeholders, replaced with the finalized cohort model for each deployment.

Cost Model: What's Included

A cohort budget typically includes five key components

Facilitation & Delivery

  • Trainer/facilitator time
  • Session preparation and quality assurance
  • Learner support (remediation, guidance)

Curriculum & Materials

  • Facilitator kit (slides, scripts, worksheets)
  • Learner workbooks / handouts
  • Local language adaptation (where required)

Safeguarding & Controls

  • Consent flows and safeguarding compliance
  • Supervision and code‑of‑conduct enforcement
  • Safe usage protocols for learners

Measurement & Reporting

  • Baseline/endline assessments
  • Attendance and completion tracking
  • Outcome dashboard (aggregated)
  • End‑of‑cohort report for partners/funders

Operations & Enablement

  • Partner onboarding and readiness check
  • Train‑the‑trainer support (if applicable)
  • Coordination, monitoring, and documentation

Cost Per Learner: How We Calculate It

Clear, auditable formulas for transparent reporting

Cost per enrolled learner

Total cohort cost ÷ # learners enrolled

Cost per completed learner (Recommended)

Total cohort cost ÷ # learners who complete

Cost per learner with outcomes tracked (Recommended)

(Cohort cost + M&E cost) ÷ # learners with baseline & endline

For funder reporting, we recommend emphasizing cost per completed learner and cost per learner with outcomes tracked, since these reflect program effectiveness and accountability.
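As a minimal worked sketch of the three formulas above (all figures are hypothetical placeholders, not actual program costs):

```python
# Sketch of the three cost-per-learner formulas above.
# All figures are hypothetical placeholders, not actual program costs.

def cost_per_enrolled(total_cohort_cost: float, enrolled: int) -> float:
    """Total cohort cost ÷ number of learners enrolled."""
    return total_cohort_cost / enrolled

def cost_per_completed(total_cohort_cost: float, completed: int) -> float:
    """Total cohort cost ÷ number of learners who complete."""
    return total_cohort_cost / completed

def cost_per_outcomes_tracked(total_cohort_cost: float, me_cost: float,
                              tracked: int) -> float:
    """(Cohort cost + M&E cost) ÷ learners with baseline & endline data."""
    return (total_cohort_cost + me_cost) / tracked

# Hypothetical cohort: 25 enrolled, 22 completers, 20 with both
# assessments; 100,000 total cohort cost plus 10,000 for M&E
# (currency-agnostic).
print(cost_per_enrolled(100_000, 25))                  # 4000.0
print(cost_per_completed(100_000, 22))                 # ~4545.45
print(cost_per_outcomes_tracked(100_000, 10_000, 20))  # 5500.0
```

Note how the recommended metrics come out higher than cost per enrolled learner: they divide by a smaller denominator, which is exactly why they are the more honest measures of what a funded outcome costs.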

What Drives Cost Up or Down

Key levers for efficiency and quality

Efficiency Levers (Reduce Cost)

Reduce unit cost without reducing quality

  • Using existing partner infrastructure (labs/classrooms)
  • Train‑the‑trainer to reduce reliance on external trainers
  • Standardized curriculum + reusable teaching assets
  • Shared device pools and rotated schedules
  • Hybrid practice assignments (where safe and feasible)
  • Cohort batching (multiple cohorts at a site per quarter)

Cost Drivers (Increase Budget)

May increase budget but improve outcomes

  • Higher supervision requirements for minors/vulnerable groups
  • Local language adaptation and accessibility design
  • Additional remediation support for foundational learning gaps
  • Independent evaluation or third‑party verification
  • Device procurement / repairs / connectivity upgrades

Flexible Funding

Sponsorship / Co‑Funding Options

Flexible funding models to match your impact goals

Option A

Sponsor a Cohort

Fastest, most common

Fund delivery for [X] learners at [1] partner site

Includes:

  • Outcome tracking
  • End‑of‑cohort reporting pack

Budget range:

₹ / $ TBD based on cohort model

Option B

Multi‑Cohort Rollout

Scale

[3–6] cohorts across [2–5] sites

Includes:

  • Trainer academy + QA checks
  • Consolidated dashboard

Budget range:

₹ / $ TBD

Option C

Trainer Academy

Capacity building

Train and certify [X] facilitators

Includes:

  • Coaching + delivery QA
  • Refresher pathway

Best for networks with multiple centers

Budget range:

₹ / $ TBD

Option D

Devices / Infrastructure

Offline‑First Enablement

Device pool (shared laptops/tablets)

Includes:

  • Connectivity and setup support
  • Optional offline-first learning support design

Budget range:

₹ / $ TBD

Option E

Measurement & Evaluation

Evidence

Stronger proof for RFPs and scale funding

Includes:

  • Independent evaluation (optional)
  • Publishable learning brief

Budget range:

₹ / $ TBD

Outcomes We Measure

What partners can expect across four outcome domains

AI Literacy & Practical Usage

  • Understanding AI basics and limitations
  • Prompting skills for useful results
  • Habits for verifying critical information (explicitly taught)

Safety Competence & Responsible AI

  • Recognizing scams/misinformation/deepfakes
  • Privacy awareness and safe data practices
  • Responsible usage norms (respect, consent, boundaries)

Learning Enablement

  • Study planning, practice workflows, and confidence
  • Capstone project completion (applied learning)

Livelihood Readiness (18+ track)

  • CV/cover letter readiness
  • Interview practice participation
  • Portfolio artifacts and professional communication

KPI Library

Recommended indicators funders commonly value

Important: These are recommended indicators, not promises. Targets should be set after baseline data is collected.

Access & Completion

  • Enrollment count
  • Attendance rate
  • Completion rate
  • Drop‑off reasons (to improve design)

Learning Outcomes

  • Baseline → endline AI literacy score change
  • Capstone submission rate + quality rubric score
  • Skill demonstration checklist completion

Safety Outcomes

  • Safety scenario competency score change
  • Privacy habits checklist adoption

Implementation Quality

  • Facilitator delivery fidelity score
  • Safeguarding compliance checklist score
  • Partner satisfaction score

Optional (18+ outcomes)

  • CV completion rate
  • Mock interview completion rate
  • Job application activity (where applicable and ethical to track)
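To make the indicators above concrete, here is a minimal sketch of how a few of them (enrollment, attendance rate, completion rate, and baseline → endline score change) might be computed from cohort records; the field names and sample values are hypothetical:

```python
# Sketch: computing a few KPI Library indicators from cohort records.
# Field names and sample values are hypothetical, for illustration only.

records = [
    {"attended": 9, "sessions": 10, "completed": True,  "baseline": 42, "endline": 71},
    {"attended": 7, "sessions": 10, "completed": True,  "baseline": 55, "endline": 80},
    {"attended": 3, "sessions": 10, "completed": False, "baseline": 48, "endline": None},
]

enrollment = len(records)
attendance_rate = sum(r["attended"] / r["sessions"] for r in records) / enrollment
completion_rate = sum(r["completed"] for r in records) / enrollment

# Baseline → endline change, restricted to learners with both assessments
# (the same denominator used for "cost per learner with outcomes tracked").
tracked = [r for r in records if r["baseline"] is not None and r["endline"] is not None]
avg_score_change = sum(r["endline"] - r["baseline"] for r in tracked) / len(tracked)

print(f"Enrollment: {enrollment}")                               # 3
print(f"Attendance rate: {attendance_rate:.0%}")                 # 63%
print(f"Completion rate: {completion_rate:.0%}")                 # 67%
print(f"Avg AI literacy score change: {avg_score_change:+.1f}")  # +27.0
```

Only aggregates are reported, consistent with the privacy-by-design commitment to aggregated reporting by default.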

Reporting Cadence

Funder‑ready reporting at every stage

What Partners Receive

  • Implementation plan (pre‑launch)
  • Cohort report (end of cohort) including KPIs, learnings, photos/videos where consented
  • Aggregated dashboard (monthly or quarterly for multi‑site deployments)
  • Risk & safeguarding report (incident logs, compliance checklist summary—non-identifiable)

Typical Cadence

  • Weekly (internal): Attendance, delivery notes, safeguarding checks
  • Monthly (partners/funders): Cohort progress + early indicators
  • End‑of‑cohort: Baseline/endline outcomes + capstones + learnings
  • Quarterly (scale partners): Consolidated portfolio reporting across sites

Governance, Safeguards & Risk Management

Building investor confidence through responsible practices

Safeguarding-first delivery

  • Code of conduct for staff/volunteers
  • Consent-first participation and storytelling
  • Supervision standards for minors and vulnerable youth
  • Clear grievance and escalation mechanism
  • No public sharing of identifiable learner details

Responsible AI standards

  • Safe usage rules taught and enforced
  • "Human-in-the-loop" facilitation
  • No harmful prompts or personal data sharing
  • Strict usage norms and boundaries

Privacy-by-design

  • Minimal data collection
  • Aggregated reporting by default
  • Access controls and encryption

Scalability & Replication

How we grow without losing quality

  • Standardized curriculum and playbooks
  • Trainer academy + certification pathway
  • Partner-led delivery model
  • Quality assurance checks + implementation coaching
  • Data-backed continuous improvement loops

What We Need From Funders/Partners

To maximize outcomes and speed-to-scale

  • Multi-cohort commitments (preferred) for lower unit cost and stable delivery
  • Support for devices/connectivity where needed
  • Funding for M&E and safeguarding (often underfunded but essential)
  • Flexibility to localize curriculum (language, context, age appropriateness)
  • Optional third-party evaluation for large-scale deployments

Want a Costed Rollout Plan with Measurable KPIs?

We'll share a customized cohort plan, budget structure, and reporting template aligned to your CSR requirements.

Frequently Asked Questions

Do learners need coding experience?

No. This is AI usage literacy—practical skills, prompting, safety, and productivity.

Can this be deployed through our existing learning centers?

Yes. Our model is designed to integrate into partner labs/classrooms with train-the-trainer support.

How do you prove outcomes?

We use baseline/endline assessments, attendance/completion tracking, capstone rubrics, and safety scenario checks—reported in aggregated dashboards.

Can you support low-connectivity environments?

Yes. We design offline-friendly delivery patterns and can adapt the deployment approach based on site readiness and safeguarding constraints.