Facebook says none of the approximately 200 people who watched the New Zealand mosque shooting live flagged the video to moderators, underscoring how difficult it is for technology companies to monitor violent or disturbing content in real time.

The social media giant published new details about its response to the video in a blog post. The shooter's 17-minute live broadcast was viewed fewer than 200 times, and the first user report came 12 minutes after the broadcast ended. Fifty people were killed in attacks on two mosques in Christchurch. Facebook removed the video "in minutes" after being notified by police, said Chris Sonderby, Facebook's deputy general counsel.

"During the live broadcast, no users reported the video," said Sonderby.

"We continue to work around the clock to prevent this content from appearing on our website, using a combination of technology and people."

Facebook has previously said that in the first 24 hours after the massacre it removed 1.5 million videos of the attacks, "of which over 1.2 million were blocked at upload," meaning about 300,000 copies made it onto the site before they were taken down.

The video's rapid spread online has renewed pressure on Facebook and other social media sites, such as YouTube and Twitter, over content moderation. Many are asking why Facebook could not detect and remove the video faster.

On Tuesday, New Zealand Prime Minister Jacinda Ardern expressed disappointment that the footage was still online four days after the killings. She said she had received "some communication" from Facebook's chief operating officer, Sheryl Sandberg, on the issue.

"It's awful, and while they gave us these assurances, the ultimate responsibility lies with them."

Facebook uses artificial intelligence and machine learning to detect objectionable material, while relying on the public to flag content that violates its standards.

Those reports are sent to human reviewers, who decide what action to take, the company said in a video in November. It also described how it uses computer vision to detect 97 per cent of graphic violence before anyone reports it. However, it is less clear how these systems apply to Facebook's live streaming.
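
The flow described above combines automated scoring with user reports feeding a human review queue. The sketch below illustrates that idea in Python; the class names, scores and thresholds are assumptions for illustration only, not Facebook's actual systems.

```python
# Minimal sketch of a moderation triage flow: posts reach human reviewers
# either because an automated classifier scores them highly or because
# users report them. All names and thresholds here are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    violence_score: float      # hypothetical classifier output, 0.0-1.0
    user_reports: int = 0      # number of user flags received

@dataclass
class ReviewQueue:
    items: List[Post] = field(default_factory=list)

    def triage(self, post: Post, score_threshold: float = 0.9) -> None:
        # Proactive detection: a high classifier score queues the post
        # without waiting for a user report; any user report also queues it.
        if post.violence_score >= score_threshold or post.user_reports > 0:
            self.items.append(post)

queue = ReviewQueue()
queue.triage(Post("live-123", violence_score=0.95))                  # flagged by the model
queue.triage(Post("live-456", violence_score=0.2, user_reports=3))   # flagged by users
print([p.post_id for p in queue.items])  # both reach human reviewers
```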

To report a live video, a user must know to click a small set of three grey dots on the right side of the post. After clicking "Report live video," they can choose from a list of objectionable content types, including violence, bullying and harassment. Users are also encouraged to contact local law enforcement if someone is in immediate danger.

Before the company was made aware of the video, an 8chan user had already posted a link to a copy of it on a file-sharing site, Sonderby said.

8chan is a dark corner of the internet where those disaffected by mainstream social media sites often air extremist, racist and violent views.

The Global Internet Forum to Counter Terrorism, a group of global internet companies led by Facebook, YouTube, Microsoft and Twitter, said the video was distributed by users who wanted to share it with others, and that it had added more than 800 different versions of it to a shared database used to block violent terrorist images and videos.

The group said it added "digital fingerprints" of visually distinct versions of the video to its database. The step was taken in response to attempts by internet users to re-upload the video after editing or repackaging it, producing versions with different digital fingerprints that avoid detection.
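
To make the fingerprinting idea concrete, the sketch below shows a very simple perceptual "average hash" in Python and a bit-distance comparison. It is only an illustration of why edited or re-encoded copies can change a fingerprint; real systems used by the industry are far more robust, and the file names and threshold here are assumptions.

```python
# Illustrative "digital fingerprint" for a single image frame using an
# average hash (aHash). Small edits to the image shift some bits of the
# hash, which is why re-packaged copies can evade exact-match blocking.
from PIL import Image  # assumes the Pillow library is installed

def average_hash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual fingerprint of the image at `path`."""
    # Shrink and convert to greyscale so the hash reflects coarse structure.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        # Each bit records whether a pixel is brighter than the average.
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two fingerprints (lower = more similar)."""
    return bin(a ^ b).count("1")

# Hypothetical usage, assuming these image files exist:
# known = average_hash("blocked_frame.png")
# candidate = average_hash("uploaded_frame.png")
# if hamming_distance(known, candidate) <= 10:  # threshold is illustrative
#     print("Likely a re-upload of blocked content")
```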

"The incident highlights the importance of industry collaboration in terms of the range of online terrorists and violent extremists," said the group, which was founded in 2017 in response to official pressure to continue to fight online extremism.

In a series of tweets the day after the shootings, Alex Stamos, Facebook's former chief security officer, laid out the challenge tech companies face in trying to keep up with new versions of the video.

"Every time this happens, companies need to recognize this and create a new fingerprint," Stamos said. "What you see on the big platforms is the water that runs around thousands of fingers that have been stabbed in a dam," Stamos said. The big tech companies are blocking more than 99 percent of the videos being uploaded. That's not enough to make it impossible to find. "
