Designed to loosely emulate the human brain, deep-learning AI systems can spot tumors, drive cars and write text, showing spectacular results in a lab setting. But when it comes to using the technology in the unpredictable real world, AI sometimes falls short. That's worrying when it is touted for use in high-stakes applications like healthcare. The stakes are also dangerously high for social media, where content can influence elections and fuel mental-health disorders. Yet Facebook's faith in AI is clear on its own site, where it often highlights machine-learning algorithms before mentioning its army of content moderators. Zuckerberg also told Congress in 2018 that AI tools would be "the scalable way" to identify harmful content. Those tools do a good job at spotting nudity and terrorist-related content, but they still struggle to stop misinformation from propagating.(2) The problem is that human language is constantly changing.