They reported a word error rate (WER) of 5.9 per cent, down from the 6.3 per cent WER the team reported just last month.
The 5.9 per cent error rate is about equal to that of people who were asked to transcribe the same conversation, and it is the lowest ever recorded against the industry-standard Switchboard speech recognition task.
"We've reached human parity. This is a historic achievement," Xuedong Huang, the company's chief speech scientist, said in a blog post.
The milestone means that, for the first time, a computer can recognise the words in a conversation as well as a person would.
In doing so, the team beat a goal they set less than a year ago - and greatly exceeded everyone else's expectations as well.
Over the decades, most major technology companies and many research organisations joined in the pursuit.
"This accomplishment is the culmination of over twenty years of effort," said Geoffrey Zweig, who manages the Speech and Dialogue research group.
That includes consumer entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription and personal digital assistants such as Cortana.
"This will make Cortana more powerful, making a truly intelligent assistant possible," Shum said.
The research milestone does not mean the computer recognised every word perfectly. In fact, humans do not do that, either.
Instead, it means that the error rate - the rate at which the computer mishears a word, transcribing "have" as "is" or "a" as "the" - is the same as you would expect from a person listening to the same conversation.
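To make the metric concrete: word error rate is conventionally computed as the word-level edit distance (substitutions, insertions, and deletions) between a reference transcript and the system's output, divided by the number of words in the reference. This is a minimal illustrative sketch, not the team's actual scoring code:

```python
def wer(reference, hypothesis):
    """Word error rate: edit distance between word sequences,
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[-1][-1] / len(ref)

# Two substitutions ("have"->"is", "a"->"the") out of four words: WER = 0.5
print(wer("i have a cat", "i is the cat"))
```

In production scoring (for example with NIST's sclite tool) transcripts are also normalised for casing, punctuation, and contractions before alignment.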
The push that got the researchers over the top was the use of neural language models, in which words are represented as continuous vectors in a shared space, so that related words such as "fast" and "quick" end up close together.
"This lets the models generalise very well from word to word," Zweig said.
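The "close together" idea can be illustrated with cosine similarity between word vectors. The three-dimensional vectors below are invented for illustration; a real neural language model learns much higher-dimensional vectors from data:

```python
import math

def cosine(u, v):
    # Cosine similarity: near 1.0 means the vectors point the same way,
    # near 0.0 means unrelated, negative means opposed.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings, hand-picked so that near-synonyms
# point in similar directions.
vectors = {
    "fast":  [0.9, 0.8, 0.1],
    "quick": [0.8, 0.9, 0.2],
    "slow":  [-0.7, -0.6, 0.1],
}

print(cosine(vectors["fast"], vectors["quick"]))  # high: near-synonyms
print(cosine(vectors["fast"], vectors["slow"]))   # negative: opposites
```

Because similar words share similar vectors, evidence the model gathers about "fast" transfers to "quick", which is the generalisation Zweig describes.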