Using "web crawler" software designed to search, index and back up a website, 30-year-old Snowden "scraped data out of our systems" while he went about his day job, The New York Times quoted a senior intelligence official as saying.
"We do not believe this was an individual sitting at a machine and downloading this much material in sequence," the official said.
The process by which Snowden gained access to a huge trove of the country's most highly classified documents, he said, was "quite automated," and the former CIA employee kept at it even after he was briefly challenged by agency officials.
Snowden's "insider attack," by contrast, was hardly sophisticated and should have been easily detected, investigators found.
Snowden had broad access to the NSA's complete files because he was working as a technology contractor for the agency in Hawaii, helping to manage its computer systems at an outpost that focuses on China and North Korea.
A web crawler, also called a spider, automatically moves from website to website, following links embedded in each document, and can be programmed to copy everything in its path.
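To make that mechanism concrete, here is a minimal sketch of such a crawler in Python, offered purely as illustration: the starting address, page limit and same-site rule are assumptions for the example, and nothing about it reflects the actual software Snowden is reported to have used.

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects the target of every <a href="..."> link on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=50):
        """Breadth-first crawl: fetch a page, copy it, queue its links."""
        seen = {start_url}
        queue = deque([start_url])
        pages = {}
        while queue and len(pages) < max_pages:
            url = queue.popleft()
            try:
                with urlopen(url, timeout=10) as response:
                    body = response.read()
            except OSError:
                continue  # skip pages that fail to load
            pages[url] = body  # "copy everything in its path"
            parser = LinkExtractor()
            parser.feed(body.decode("utf-8", errors="replace"))
            for href in parser.links:
                absolute = urljoin(url, href)
                # Stay on the starting site, as a site-backup crawler would.
                if (urlparse(absolute).netloc == urlparse(start_url).netloc
                        and absolute not in seen):
                    seen.add(absolute)
                    queue.append(absolute)
        return pages

Even this toy version shows why such activity looks automated rather than human: it requests pages far faster and more systematically than a person browsing, which is the pattern the official described.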
According to media reports, Snowden had traveled to India in 2010, spending six days in New Delhi taking courses in "ethical hacking," in which he learned advanced techniques for breaking into computer systems and exploiting flaws in software, the reports said.
Among the materials prominent in the Snowden files are the agency's shared "wikis," databases to which intelligence analysts, operatives and others contributed their knowledge. Some of that material indicates that Snowden "accessed" the documents. But experts say they may well have been downloaded not by him personally but by the crawler programme acting on his behalf.
