My understanding is that the platform protects free speech, and that speech doesn't infringe on the rights of others as long as it doesn't cause actual harm.
Arguably there's a difference between 'actual harm', 'risk of harm' and 'perceived harm'.
'Actual harm' could result from a violent crime. 'Risk of harm' could arise from something like defamation, which is also a crime. 'Perceived harm' can happen when we feel 'offended'.
I suspect that feeling offended, on its own, will not justify the removal of content from the platform.
However, feel free to help clarify this and share suggestions regarding what you feel should be regarded as 'inappropriate'.
My understanding is that you can share suggestions here in the questions section.
My understanding, or rather my suggestion, is that anyone who wants a 'safe space' with regulated speech can create a private group and set their own rules and group culture within it.
That way, free speech can be respected on the platform, while there are also designated places with 'restricted' / 'regulated' forms of communication for those who feel a need for more 'protected spaces'. These spaces can be respected as well.
We can all coexist with our different needs that way 😊🙏🏽