I’ve received numerous requests to comment on Twitter’s new plan to start flagging what it considers abusive content from verified government officials, representatives and candidates.  Like any other decent, rational, halfway intelligent person, I’ve thought twice about what my comments might be – and if I really want to comment at all.

Easy access to the internet, the speed of information and the permanent availability of whatever we might post give most people pause. What do I want to say about this issue? Do my comments contribute something worthwhile to the conversation? Who might be influenced or offended by my words? And, in the end, do I really care?

These are just a few of the questions I pondered as I sat down to write this post.  But, apparently, Twitter no longer trusts those with a large number of followers to give any of it a moment’s thought.

According to multiple sources, Twitter officials recently announced they’ll begin placing a notice over tweets that violate their standards regarding abusive or bullying behavior, but that they still deem to have some public value. Users will have to click through the notice to view the original tweet, and will also see a link to the following message: “The Twitter rules about abusive behavior apply to this Tweet. However, Twitter has determined that it may be in the public’s interest for the Tweet to remain available.”

On the surface, this may not seem significantly different from the motion picture industry’s rating system or the advisory notices posted prior to most on-demand programming. But dig a little deeper, and what makes Twitter’s proposed protocol particularly unsettling is the company’s intention to apply it very selectively.

For now, this new policy may apply solely to abusive or bullying language from political figures with 100,000 followers or more. But why not from everyone with 100,000 followers? Or 50,000 followers? Or fewer? In other words, why not to everyone, period? Why not to you and me?

The answer, of course, is because we’d never stand for it. 

For one thing, there’s no way of knowing who’s playing Big Brother or whether objective criteria will be employed. According to Twitter, employees across the company’s Trust and Safety, Legal, Public Policy and regional teams will determine whether a tweet is considered of public interest by evaluating factors including the “immediacy and severity of potential harm from the rule violation,” whether preserving the tweet will allow for public accountability, and whether it provides unique context not otherwise available.

Hmmmm.  Can anyone other than you possibly share your personal thoughts in the very same way you do?  Is that content otherwise available? Even if they could, does that negate your right to speak for yourself?

Even more troubling is the ambiguity of the term “abusive.” If I post that someone “should be shot,” that’s clearly abusive (not to mention potentially criminal). The Supreme Court decided long ago that I have a right to free speech – until I falsely yell “fire” in a crowded theater. But what if I call someone an “idiot”? Or just suggest they’re horrible at their job? Is that “abusive”? Should the arbiters of decency and propriety at Twitter flag my post simply because I might hurt someone’s feelings?

But most troubling to me is the door this proposed policy opens and its potential to shut out an increasingly broad set of influencers. If Twitter starts flagging tweets from political figures with at least 100,000 followers, when will it begin doing the same to brands and marketers with large audiences? What if someone in the back room decides they don’t like a word in our headline? That our image doesn’t show enough diversity? Or our message doesn’t provide “unique context” unavailable from other brands?

This is about more than just Freedom of Speech. It’s really about Freedom of Choice. Living in a free society served by censorship-free media should mean you have the right to speak your mind and share your thoughts, no matter how ill-informed or offensive they might be. In short, you have the right to be wrong – and the right to suffer the outrage of others who disagree with or downright despise you.

We can’t outlaw ignorance.  We can’t legislate kindness.  All we can do is avail ourselves of the opportunity to answer and enlighten and fight back, using the freedom and resources afforded us.

Please don’t misunderstand.  I am no fan of political figures (and other thought leaders) who use social media to bully those who don’t share their opinions or ambitions.  But I am a rabid fan of freedom and the accountability that inevitably comes with it. This proposed reversal of policy by Twitter absolves digital bullies of that accountability.

From where I sit, as the CEO of a digital marketing firm, the biggest threat in social (and other) media isn’t the public opinions we choose to share, but the private data that can be collected and shared without our consent.

A plethora of players is accessing and leveraging information across today’s digital universe, and that has created a huge black hole regarding the rights, roles and ownership of different types of data. Who owns all that customer data being collected? Is it the consumer who chose to share it (knowingly or not)? Or those who invested all the time, money and effort in collecting it?

Because technology can move more rapidly than regulation, we’re living in a time when leading-edge AdTech is creating new paradigms, new loopholes and new conflicts we haven’t even begun to address.  Where are the boundaries? When should regulations be instituted? And who should ultimately decide?

Another benefit of living in a free society with a free market is that the consumer usually decides where to draw the line – and is doing so.  One of the biggest trends we’ve seen over the past 18 months is the evolving role consumers now play in marketing and the way their voices are being heard.  

As technology advances, we’re seeing a different type of advertising and a different kind of consumer: one much more cognizant that data is driving the ad impressions they see. There’s been an awakening that’s made them more curious about how brands are utilizing their data. “Why am I getting this? How do they know?”

With so much of their personal consumption, product exploration and shopping journey being performed digitally, consumers are also taking a more active, conscious role in determining what’s shared and what’s “private.”

At the same time, smart digital consumers also realize there’s a relationship between content and commitment; that customer data is the “currency” of the internet, and if they want all that online content at no monetary cost, then they’re going to pay with their personal information.

But today’s consumer also wants greater equilibrium between what they share and what they get. The proliferation of ad-blocking software, anonymous browsing and tightened privacy settings is another way of consumers saying, “Look, this is not in balance.” If we deliver good advertising and provide great customer experiences, consumers are going to be much more willing to open up and give us more useful data.

In this way, consumers are casting their votes in support of – or in defiance of – brands. The consumer’s loyalty to or equity in a brand is no longer measured simply in terms of sales revenue, but also by the degree of access they choose to grant us.

In the end, whether we’re talking about public opinion or private data, the best approach boils down not only to the principle of choice, but also to a matter of respect.

As long as we have respect for each other, there is a natural equilibrium, an effective give-and-take. If we believe we’re giving something of value, we need to feel we’re getting something valuable in return. If there’s a genuine value proposition, then there’s a healthy relationship between tweeters and their followers, between brands and their customers.

Respect has always been a critical part of any successful value proposition and is still the most important component to building consumer trust.