More than half of current online harassment cases occur on social media platforms. Worldwide, an abusive or toxic statement is posted on social media every 30 seconds. Exposure to such language contributes to mental and emotional stress, with one in ten people affected developing such issues. Various datasets on harassment, abusive language, and hate speech have been developed over the years. However, most rely on generic definitions or platform-specific guidelines to determine which content is abusive, resulting in conflicting notions of what counts as abusive and, in turn, in problems such as mislabeling. Hate speech and other forms of abusive language, such as insults and defamation, are offences regulated at the national level. Can legislation, then, shed light on how to label abusive language data consistently? In this project, we are training a team of legal experts to label YouTube comments on controversial topics. The team will rate comments on a scale of abusiveness, which includes labels for comments situated in a grey area.