"No-one wants to be known as 'the terrorists' platform" or the first choice app for pedophiles," May is expected to say according to excerpts released by her office ahead of her speech Thursday at the World Economic Forum in Davos. "Technology companies still need to go further in stepping up their responsibilities for dealing with harmful and illegal online activity."
Investors in social media businesses such as Facebook Inc, Twitter Inc and Google's YouTube, a part of Alphabet Inc, will be asked "to consider the social impact of the tech companies they are investing in".
In a recruitment drive among the global elite, May wants those with the biggest stakes in these companies to pile on pressure as well.
At stake is how to stop social media being used as platforms for extremist propaganda, hate speech, child sexual exploitation or human trafficking. The companies under fire have showcased efforts to use artificial intelligence to stop such content from appearing online.
After two years of repeatedly bashing social media companies, May will say that successfully harnessing the capabilities of AI -- and responding to public concerns about AI's impact on future generations -- is "one of the greatest tests of leadership for our time".
Facebook recently told UK lawmakers it now removes 83 per cent of terrorist content within one hour. YouTube told the same parliamentary committee that it removes 50 per cent of such content within two hours and 70 per cent within eight hours. Twitter said it now identifies and removes 75 per cent of accounts posting terrorist content before they issue a single tweet.
While May is expected to acknowledge "some progress" on the part of technology companies, she'll stress the need to go further to have content removed automatically. They "must focus their brightest and best" toward that goal, she will say.
To be sure, the adoption of the most cutting-edge technology for this purpose raises a whole other set of questions -- around ethics, and the extent to which government should, or even can, regulate AI.