India’s software sector leads global delivery. That edge depends on teams that move fast and still think clearly. Here is how Indian teams can use artificial intelligence (AI) assistants and still build independent, resilient engineers. A randomised controlled study by GitHub and collaborators found a clear speed gain: developers using Copilot finished a real task, setting up a basic web server, about 55 per cent faster than those without the assistant. Other trials and company tests also reported quicker code merges and better focus. Taken together, the results point to a pattern. When tools handle routine setup, work speeds up and people feel more motivated.
However, speed alone does not give a complete picture. A controlled security experiment showed a safety gap: trained programmers using an AI helper wrote less secure code across many tasks, even as their confidence rose. This over-trust in the tool’s output is known as automation bias. Fast help can hide new risks, so checks and protections must match how serious the task is.
Researchers have also noticed bigger shifts in how programmers learn. An analysis in PNAS Nexus tracked the early months after large language models became publicly available. Compared with similar forums where the models were restricted or less effective, question-and-answer activity on Stack Overflow fell by about 25 per cent. The researchers suggest many people started asking their questions in private chats instead. When less help stays in public view, beginners find fewer open guides, and future AI systems receive poorer training data. Over time, that can hurt the craft of programming.
Recent user studies also observed verbatim copying: many developers accepted the model’s suggestions even when they contained mistakes or did not fit the task. The same research noted clear gains in routine work, but more struggles with open-ended problems that have few existing examples. It recommends simple habits for keeping thinking active: check sources, run tests, and pause to reason before accepting a suggestion.
Security experts point to another risk. Research from universities and companies documents weak default settings, leaked passwords and tokens, and new attack techniques such as prompt injection. These risks do not cancel the benefits of such tools, but they demand careful oversight, layered defences, and habits that stop blind acceptance of any suggestion.
For India, the issue matters now. Firms in Bengaluru, Hyderabad, and Pune already deploy AI assistants at scale on client codebases. The gains are real, but so are the delivery, liability, and reputational costs if security slips or team learning stalls. The task is to retain the gains while guarding both security and judgment.
Coding needs clear mental maps of how a program works, and those maps are built with steady practice. Even when an AI assistant produces code that runs, you still need to catch edge cases, memory and time limits, and slow paths. Studies report that heavy AI use can make developers search less, read fewer original sources, and try fewer approaches. A useful fix is to alternate between working with AI help and working unaided. That keeps curiosity and problem-solving strong while you still gain speed.
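To make the edge-case point concrete, here is a minimal, hypothetical sketch in Python; the function is illustrative and not drawn from any of the studies cited above. The first version is the kind of suggestion an assistant might plausibly produce: it runs on normal input but crashes on an empty list, the limit case a careful reader should catch.

    # A plausible assistant suggestion: works on normal input, fails on [].
    def average(scores):
        return sum(scores) / len(scores)

    # The human check: handle the limit case the suggestion ignores.
    def average_safe(scores):
        if not scores:          # an empty list would divide by zero above
            return 0.0
        return sum(scores) / len(scores)

    assert average_safe([]) == 0.0
    assert average_safe([2, 4, 6]) == 4.0

The point is not the arithmetic but the habit: trace the inputs the suggestion never saw.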
Do software teams need time without AI? Findings from many analyses say “yes”, provided the AI-free time is planned deliberately. Here is a simple, research-based way to approach it.
Coding without help keeps core skills alive. It trains the brain to trace logic, read errors, and reason about cause and effect, habits that prevent long delays when tools fail or offer only a weak hint. Research reviews advise AI-free practice where mistakes carry high risk: security work, code that handles many threads, performance-critical paths, and changes to core systems. Trials report that AI assistants raise speed while safety can slip, so AI-free time should scale with the level of risk. Practice without the tools builds careful security, privacy, and performance habits, and it sharpens risk detection: engineers learn to spot unsafe inputs, race conditions, and slow paths before code reaches users.
Use a simple habit. First, try the task without AI; then check the work with the tools. This routine builds trust in your own skills, discourages blind copying, and gives you the confidence to face hard bugs, live outages, and customer questions. Studies also suggest measuring your work with and without AI help. In AI-free time, note how many bugs you find, how long you take to reach the root cause, and what security issues appear. When you work with AI help, do the same tasks and record any changes you make after the AI suggests code. Then compare the two periods and learn from the differences, since AI helps some tasks more than others.
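Even a simple session log is enough for this comparison. The sketch below, in Python, is a hypothetical illustration; the field names and the sample task are invented, not taken from any cited study.

    import csv
    import os

    # Hypothetical log for comparing AI-assisted and AI-free sessions.
    FIELDS = ["date", "task", "mode", "bugs_found",
              "minutes_to_root_cause", "security_issues",
              "edits_after_suggestion"]

    def log_session(path, row):
        """Append one record; 'mode' is either 'with_ai' or 'no_ai'."""
        write_header = not os.path.exists(path)
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if write_header:
                writer.writeheader()
            writer.writerow(row)

    log_session("sessions.csv", {
        "date": "2024-05-10", "task": "fix login timeout", "mode": "no_ai",
        "bugs_found": 2, "minutes_to_root_cause": 40,
        "security_issues": 0, "edits_after_suggestion": 0,
    })

Keeping the mode as an explicit column makes the with-AI and without-AI comparison a simple filter over the records.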
Security hygiene remains essential. When AI tools read the web, a malicious page can push them toward unsafe code or wrong steps. Run automated code checks before committing changes, and scan the code so that no passwords or keys are exposed. For sensitive operations, such as comparing secrets, keep the timing constant for every input, so that attackers cannot learn secrets by measuring response times. During review, think like an attacker: list the obvious break points and verify the code against that list. All of this evidence points to a simple plan. Use AI assistants to work faster, but set aside some AI-free time and keep your own judgment strong. Software is a hands-on craft: stay sharp by testing yourself and choosing protective habits. India’s software edge will then be both faster and safer.
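To illustrate the constant-time point above, here is a minimal Python sketch using the standard library’s hmac.compare_digest; the token values are hypothetical.

    import hmac

    # An ordinary equality check can return early once the strings differ,
    # which may let an attacker recover a secret by timing many guesses.
    def check_token_naive(supplied, expected):
        return supplied == expected

    # hmac.compare_digest takes roughly the same time whether the inputs
    # differ at the first byte or the last, so timing reveals little.
    def check_token_safe(supplied, expected):
        return hmac.compare_digest(supplied.encode(), expected.encode())

    print(check_token_safe("example-token-123", "example-token-123"))  # True
    print(check_token_safe("example-token-124", "example-token-123"))  # False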
The author was professor of electrical engineering, IIT Delhi, vice-chancellor, JNU and chairman, UGC
(Views are personal)