
If 2025 was the year everyone talked about AI agents, 2026 is the year India started building them. The Agentic India hackathon series — running across multiple editions on Reskilll — has become the country’s premier event for developers who want to move beyond chatbots and into autonomous AI systems that can actually do things.
Across three editions, the Agentic India series has attracted over 2,200 teams and introduced thousands of Indian developers to agentic AI frameworks like LangChain, CrewAI, AutoGen, and Azure AI Agents SDK.
What Are AI Agents and Why Do They Matter?
An AI agent is software that can reason about a problem, make a plan, use external tools, and take actions — all without step-by-step human instructions for each action. Unlike a chatbot that simply responds to prompts, an agentic AI system can:
- Break complex tasks into subtasks — understanding that “research this topic” means searching, reading, comparing, and synthesizing
- Decide which tools to use — choosing between web search, code execution, API calls, or database queries based on the task
- Execute actions autonomously — actually performing searches, writing code, calling APIs, and processing results
- Evaluate and adjust — checking if results are satisfactory and trying different approaches if they’re not
- Complete multi-step workflows — chaining together dozens of actions to accomplish complex goals
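The loop behind those capabilities can be sketched in a few lines of plain Python. This is an illustrative skeleton only, not any specific framework's API: the two "tools" are stand-ins for real search and LLM calls, and the fixed plan stands in for a model-generated one.

```python
# Minimal illustrative agent loop: plan, pick a tool, act, evaluate.
# The tool functions and the hard-coded plan are stand-ins for real
# integrations (a search API, an LLM) and a model-generated plan.

def search_web(query: str) -> str:
    # Placeholder tool: a real agent would call a search API here.
    return f"results for '{query}'"

def summarize(text: str) -> str:
    # Placeholder tool: a real agent would call an LLM here.
    return f"summary of {text}"

TOOLS = {"search": search_web, "summarize": summarize}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Break a goal into steps, execute each with a tool, keep a trace."""
    plan = [("search", goal), ("summarize", f"results for '{goal}'")]
    trace = []
    for step, (tool_name, arg) in enumerate(plan):
        if step >= max_steps:   # guard against runaway loops
            break
        result = TOOLS[tool_name](arg)
        trace.append(result)
        if not result:          # evaluate: flag failures so the agent can adjust
            trace.append(f"retrying {tool_name}")
    return trace
```

Real agents replace the fixed `plan` with model-driven planning and add retry logic, but the decide-act-evaluate cycle stays the same shape.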
This is the next frontier of AI development — and India’s developer community is jumping in through the Agentic India hackathon series.
The Agentic India Series: Three Editions, Growing Impact
Edition 1: Innoquest#3 / Agentic India (September – November 2025)
The first edition launched as Innoquest#3 in September 2025 and ran through November as an offline hackathon. It brought together 1,295 teams — making it one of the largest agentic AI hackathons not just in India, but globally.
Positioned as a platform where “innovation meets creativity,” the event encouraged developers, designers, and problem-solvers to collaborate on groundbreaking solutions to real-world challenges. Building on the success of earlier Innoquest editions, this one specifically channeled the energy toward agentic AI applications.
The scale of participation — 1,295 teams in the first edition — signaled that Indian developers were hungry for hands-on experience with AI agents, not just theoretical knowledge.
Edition 2: Innoquest#4 / Agentic India 30 (February 2026)
The second edition, running February 2-25, 2026, made a bold choice: it was specifically designed for first- and second-year college students. With 867 teams participating, it proved that you don’t need years of experience to build AI agents — you need the right guidance and the right platform.
This 3-day hands-on hackathon introduced students to AI-driven agents, automation, and real-world problem solving. The format deliberately blended learning sessions with building time and collaborative work — helping students move from “what is an AI agent?” to “here’s the agent I built” in just three days.
The focus on early-year students was intentional and strategic. By introducing agentic AI concepts in their first or second year of college, participants gain a multi-year head start over peers who won’t encounter these technologies until their final year projects or first job.
Edition 3: Agentic India 30 (March 2026)
The most recent and most intensive edition — a 30-hour, mentor-led sprint running March 18-25, 2026 — pushed participants to design autonomous or tool-using AI agents that solve high-impact Indian use cases.
With 79 teams in this more selective, intensive format, the quality bar was higher. Participants could use any agent framework they preferred:
- Semantic Kernel — Microsoft’s framework for enterprise-grade agent applications
- AutoGen — Microsoft’s multi-agent conversation framework
- Azure AI Agents SDK — cloud-native agent development
- LangChain — the most popular open-source agent framework
- LlamaIndex — optimized for agents working with large document collections
- CrewAI — designed for multi-agent collaboration systems
- MCP (Model Context Protocol) — the emerging standard for tool integration
Teams were encouraged to integrate real tools and APIs — building agents that could actually interact with the world, not just generate text about interacting with it.
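Whatever framework a team chose, tool integration follows the same underlying pattern: the model emits a structured "tool call", and a small harness dispatches it to a real function. The sketch below shows that pattern in framework-agnostic Python; the tool names and the stubbed return values are illustrative assumptions, not a real API surface.

```python
import json

# Framework-agnostic tool-calling pattern: the model emits a JSON
# tool call, and the harness dispatches it to a registered function.
# Both tools below are stubs; a real agent would hit live APIs.

def get_weather(city: str) -> str:
    # Stub: would call a weather API in a real agent.
    return f"28C and clear in {city}"

def get_mandi_price(crop: str) -> str:
    # Stub: would query a market-price API in a real agent.
    return f"{crop}: 2100 INR/quintal"

REGISTRY = {"get_weather": get_weather, "get_mandi_price": get_mandi_price}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(tool_call_json)
    fn = REGISTRY.get(call["name"])
    if fn is None:
        return f"unknown tool: {call['name']}"
    return fn(**call["arguments"])
```

For example, `dispatch('{"name": "get_weather", "arguments": {"city": "Pune"}}')` routes the model's request to the real function and returns its result to the agent loop. Frameworks like LangChain and Semantic Kernel wrap this plumbing, but understanding the raw pattern makes their abstractions much easier to debug.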
What Participants Built
The solutions from the Agentic India series reflect the breadth of problems that AI agents can tackle in the Indian context:
Education Agents
Teams built agents that act as personalized AI tutors — understanding a student’s learning gaps through diagnostic questions, finding relevant resources across the web, creating practice problems at the right difficulty level, and adapting the learning path based on performance. Unlike static learning platforms, these agents actively guide the learning journey, adjusting in real-time.
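The "adapting the learning path based on performance" piece often reduces to a small difficulty policy the agent consults between questions. A minimal sketch, with an entirely hypothetical policy (streak of correct answers raises the level, a majority of misses lowers it):

```python
# Hypothetical adaptive-difficulty policy for a tutoring agent:
# raise the level on a clean streak, lower it after mostly misses.
# Thresholds and the 1-10 scale are illustrative choices.

def next_difficulty(level: int, recent_results: list[bool]) -> int:
    """Pick the next question difficulty from recent answer history."""
    correct = sum(recent_results)
    if recent_results and correct == len(recent_results):
        return min(level + 1, 10)   # all correct: step up, capped at 10
    if correct < len(recent_results) / 2:
        return max(level - 1, 1)    # mostly wrong: step down, floored at 1
    return level                    # mixed results: hold steady
```

The interesting agentic part is everything around this function — diagnosing gaps, fetching resources, generating problems — but a simple, inspectable policy like this keeps the adaptation behavior predictable.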
Government Service Navigation Agents
Several teams created agents that help citizens navigate India’s complex government services — from finding the right office and understanding eligibility criteria to filling out forms correctly and tracking application status across multiple departments. These agents handle the bureaucratic complexity that makes government services frustrating for ordinary people.
Healthcare Coordination Agents
Teams built agents that coordinate between patients, doctors, and diagnostic labs — scheduling appointments based on availability and urgency, following up on test results, sending medication reminders, and ensuring nothing falls through the cracks in India’s fragmented healthcare system where patients often manage their own care coordination.
Agricultural Advisory Agents
Agents that monitor weather data, soil conditions, market prices, and crop calendars to give farmers actionable, timely advice — when to plant, when to irrigate, when to harvest, where to sell for the best price, and what crop diseases to watch for based on current conditions.
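Under the hood, an advisory agent like this typically maps the data it gathers (via tools like the weather and price lookups above) onto farmer-facing recommendations. A minimal sketch of that mapping — every threshold here is a hypothetical placeholder, not agronomic guidance:

```python
# Illustrative rule layer an advisory agent might sit on top of:
# turn gathered readings into actionable advice. All thresholds
# are hypothetical placeholders, not real agronomic guidance.

def advise(soil_moisture_pct: float, rain_forecast_mm: float,
           price_inr_per_quintal: float, avg_price_inr: float) -> list[str]:
    """Map sensor and market readings to farmer-facing recommendations."""
    advice = []
    if soil_moisture_pct < 30 and rain_forecast_mm < 5:
        advice.append("irrigate: soil is dry and little rain is expected")
    if price_inr_per_quintal > 1.1 * avg_price_inr:
        advice.append("sell: price is more than 10% above the recent average")
    if not advice:
        advice.append("no action needed today")
    return advice
```

The agentic layer on top decides which data sources to poll, how often, and how to phrase the result in the farmer's language — but grounding the advice in explicit, auditable rules matters when the output drives real decisions.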
Legal Research Agents
Agents that can research Indian legal precedents, summarize relevant case law, identify applicable sections of acts and regulations, and help lawyers and citizens understand their legal position — making legal knowledge more accessible.
Why the Mentor-Led Format Works for Agentic AI
One thing that sets the Agentic India series apart from generic hackathons is the emphasis on mentorship. Building AI agents is genuinely hard — the frameworks are new and evolving rapidly, the design patterns are unfamiliar, debugging autonomous systems is fundamentally different from debugging traditional code, and the failure modes are unpredictable.
Through MentorVerse, participants had access to experienced mentors who could help them:
- Choose the right framework for their use case
- Design effective tool-use patterns and agent architectures
- Debug agent behavior when it goes off-track
- Handle edge cases and failure modes gracefully
- Optimize agent performance and reduce API costs
This mentorship component transforms a hackathon from a stressful coding sprint into a genuine learning experience where participants build skills they’ll use throughout their careers.
The Technology Landscape
The Agentic India series is deliberately framework-agnostic, and the diversity of tools used reflects the healthy, rapidly evolving state of the agentic AI ecosystem:
- LangChain — the most widely used framework among participants, popular for its extensive tool integrations and large community
- CrewAI — favored for multi-agent systems where different specialized agents collaborate on complex tasks
- AutoGen — Microsoft’s framework, popular among teams using Azure services and building conversational agent systems
- Semantic Kernel — chosen by teams building enterprise-grade applications with strong typing and planning capabilities
- LlamaIndex — preferred for agents that need to work with large document collections and knowledge bases
The fact that no single framework dominates is a sign of a healthy ecosystem. The hackathon encouraged experimentation, and many teams tried multiple frameworks before settling on one — learning valuable lessons about trade-offs in the process.
Impact on India’s AI Ecosystem
The Agentic India series is doing something strategically important for India’s tech ecosystem: it’s building a community of developers who understand agentic AI not just in theory, but through hands-on practice building real systems.
Over 2,200 teams across three editions means thousands of developers who have:
- Built working AI agents from scratch
- Learned to work with modern agent frameworks
- Understood the design patterns for autonomous systems
- Experienced the challenges of debugging non-deterministic AI behavior
- Connected with a community of like-minded builders
This matters because agentic AI is where the industry is heading. Companies across India and globally are actively hiring developers who can build agent-based systems, and the demand far exceeds the supply. Participants in the Agentic India series are positioning themselves at the front of this wave.
What’s Next for Agentic India
The series continues to evolve with each edition — from a broad innovation hackathon (1,295 teams) to a beginner-focused learning event (867 teams) to a targeted 30-hour intensive sprint (79 teams). Each format serves a different audience and purpose.
If you’re interested in agentic AI — whether you’re a complete beginner or an experienced developer looking to build autonomous systems — keep an eye on Reskilll’s hackathon listings for the next edition. The community is growing, the problems are getting more interesting, and the technology is advancing faster than ever.
“I participated in the Innoquest#4 edition as a first-year student. Honestly, I had no idea what AI agents were before this. By the end of 3 days, our team had a working agent that could search the web and summarize research papers. The mentors were incredibly patient with beginners like us.”
“The 30-hour sprint format in the March edition was intense but amazing. We used CrewAI to build a multi-agent system for healthcare appointment coordination. The framework-agnostic approach was great — teams could use whatever they were comfortable with.”