
Twitter and Emergency Communications

Disclaimer: the opinions represented are my own and not those of my employer.

Yesterday, I read Dan Lohrmann's "Fake Tweets: can you trust them for emergency communications?". It reminded me of a conversation that Andy Carvin and I had back in 2007, around the time my interest was waning in the Alert Retrieval Cache (ARC) - a humanitarian precursor to Twitter that I was involved in.

My main sticking point with Twitter at the time was that because it is an open forum, the way people get their signal from the noise is through tags. I've written about some of the trouble with tagging, but in the context of Twitter there is a larger issue: space. You have enough space for a short message, but to get to your audience - since not everyone is watching your tweets with bated breath 24/7 - you have to tag things, and in tagging things you end up with less space to spread your message within a tweet. In fact, as they say in Trinidad and Tobago, you can easily end up in a situation where 'the candle costs more than the funeral'. As Andy pointed out in his resultant blog entry:

Anyway, I know tools like Twitter weren’t designed for saving lives. But that doesn’t mean they wouldn’t if they had a few more features and were put to use properly. -andy

It actually wouldn't be too hard for Twitter to shove tags into its metadata, but they closed down access to their metadata some time ago.
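To make the idea above concrete, here is a minimal sketch of what tags-in-metadata could look like. The structure and field names are invented for illustration - this is not Twitter's actual API - but it shows the point: tags carried out-of-band cost no message characters.

```python
# Hypothetical sketch: carrying tags in metadata rather than in the
# message body, so the full character budget stays with the message.
TWEET_LIMIT = 140  # the limit at the time this was written


def build_tweet(message, tags):
    """Package tags out-of-band so they consume no message characters."""
    if len(message) > TWEET_LIMIT:
        raise ValueError("message exceeds the character limit")
    return {"text": message, "meta": {"tags": tags}}


tweet = build_tweet("Flooding on Main St, avoid the area",
                    ["emergency", "flood"])
print(len(tweet["text"]))  # tags add nothing to the message length
```

With tags in the body instead, "#emergency #flood" would have eaten 18 of those 140 characters - the 'candle costing more than the funeral' problem described above.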

But wait, there's more.

In the context of emergency communication - above all else - there is a need for trusted sources, something Dan highlights in his post. In the ARC, this was one of the big problems that had to be solved by human mediators: establishing, in an emergency and with no prior communication, which people communicating with ARC were trusted sources. The advantage of ARC in this regard was that untrusted sources could be pulled from the network. Meanwhile on Twitter, being an untrusted source only means the too-easily-trusting have their intelligence abused, as The New York Times shows.

The first step in any such situation on an open forum like Twitter should be verification, but on Twitter, retweets (RTs) and rephrasings of the same thing can make verification nearly impossible. Further, responding to the offending tweets makes them popular, and that in turn makes them more likely to be seen as they begin to trend. Things easily get out of control.

On closed forums - controlled forums - people have trusted sources already, sometimes directly, or sometimes through companies such as Emergency Communications Network (where I work, but again, these are my opinions and not those of ECN). People need trusted sources during times of emergency, and random people on Twitter are not as likely to be trusted sources as the particular agency responsible for their area. Direct texting pushes messages to the people who are impacted, whereas Twitter requires watching a feed. Of course, these days, the same sources also tweet, but they also directly call or contact people. Why? Because it's important. Because fake tweets happen.

Because trust is the basis for any network, and random people on Twitter aren't as trustworthy as people would apparently like to think.

Another Lesson in Customer Service: PayPal.

A few weeks ago, having just started a new job and about to get my first full check, I checked my bank account balance online to find that $299 had been taken out by PayPal. It drove me into a negative balance in my checking account the day before payday. I immediately logged into my bank's website and disputed the charge, then went to PayPal, where I cancelled the transaction and reported it as fraudulent. I changed my password and security questions.

Your Digital Signature: Who Do You Want To Be Defined By?

The Web does not just connect machines, it connects people.

- Tim Berners-Lee, speech before the Knight Foundation, 14 September 2008

 

Often, people involved in any type of Internet development need to be reminded that behind all the algorithms, IP addresses and gadgets there are actual human beings. Human beings bring a context that no gadget to date can match; despite the efforts of computer scientists to build machines that can pass the Turing test, our context is different, and it varies from person to person.

Because of that, I appreciated 'How Facebook Builds A Digital Signature for You (and Your World)'. It puts a very positive spin on things that make anyone interested in privacy cringe. Reality is somewhere between the positive and the negative - something that will keep people arguing, and keep writers in content, for decades, perhaps centuries. Beyond that level it gets even more interesting.

While the article talks about Facebook, it's important to note that the algorithms being discussed aren't Facebook-specific. Many websites do it - more to the point, many companies do it. Behind the scenes at Google, Microsoft's Bing, Amazon.com, eBay, Twitter, Instagram and just about anything popular that is 'free' to use, such algorithms churn away - getting to understand you so well that I'm surprised economists haven't publicly clued in on the data. Our behaviors, our expectations and much more are available through such sites.

When it comes to social media, it's not just about what you say about yourself. It's almost always dominated by what others say about you - and when it comes to your bank transactions, your bank is likely selling your anonymized purchase history to someone for a price.

Yet, as the article also points out, the analysis of the data can be flawed. Daniel J. Solove wrote about this in The Digital Person (2004). You can easily become persona non grata for peculiar reasons determined by algorithms as well. They're fickle things, these algorithms, and it's not too hard to understand that what you connect with is determined by them. Those connections are, by and large, used to decide what advertising should be driven at you - or what shouldn't. It becomes more peculiar still when governments, employers or anyone else nosey enough to pay for such data come into play.

But again, that's not really what I'm writing about.

What I am writing about is that the algorithms behind these connections constantly evolve and are becoming better at capturing small slices of reality. As the article says:

The trouble is that our real world — and how we describe and experience it — is constantly changing.

User experiences and things related to user experiences are constantly evolving. As an example - when you click 'Like' on Facebook, there's an assumption being made in some algorithm somewhere about what you mean. When you retweet something on Twitter, what does that mean? That's the job of algorithms.
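A minimal sketch of that job might look like the following. Everything here is invented for illustration - the actions, the weights and the scoring are assumptions, not any platform's actual model - but it shows the shape of the inference: each action is taken as a signal of some assumed strength.

```python
# Hypothetical sketch: interpreting engagement actions as weighted
# signals of interest. The weights are invented for illustration.
SIGNAL_WEIGHTS = {"like": 1.0, "retweet": 2.0, "reply": 1.5}


def interest_score(actions):
    """Sum an assumed interest score from a user's actions on a topic."""
    return sum(SIGNAL_WEIGHTS.get(action, 0.0) for action in actions)


print(interest_score(["like", "retweet", "like"]))  # → 4.0
```

The catch the article describes lives in that weight table: a 'Like' might mean agreement, sympathy, sarcasm or a misclick, and the algorithm has to pick one interpretation.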

Before you start wrapping your head in tinfoil and locking your tapioca pudding in the refrigerator, thinking that this is all done in top-secret offices of the government or corporations bent on creating Skynet, hit pause. It happens every day. Traffic lights are a brilliant example: actual data is used to determine how long lights stay red, yellow and green (or should be). The supermarket with that loyalty card tracks your purchase history so that it can decide what its inventory should look like. And, if you think about it, all this data is flawed in that it is limited to a dataset and to the algorithms that chew away on it.
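The supermarket example above is nothing more exotic than aggregation. Here's a minimal sketch, with invented sample data, of how purchase history could rank what to restock:

```python
# Hypothetical sketch: turning loyalty-card purchase history into a
# restocking priority by counting what sells most. Data is invented.
from collections import Counter

purchases = [
    ("card_001", "milk"),
    ("card_002", "milk"),
    ("card_001", "bread"),
    ("card_003", "milk"),
    ("card_002", "eggs"),
]

# Count purchases per item, ignoring which card bought it.
demand = Counter(item for _card, item in purchases)
restock_order = [item for item, _count in demand.most_common()]
print(restock_order)  # milk first, as the most-purchased item
```

And that is also where the flaw shows: the counter only sees what the dataset contains. Anyone paying cash without a card is invisible to it.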

As a programmer, I expect I've had more to do with this sort of thing than a non-programmer would - but it's all very real data. You might think that your medical history is top secret, but if your bank is getting itemized billing for your medication, it may know more about you than you realize - and that information isn't as sacred as you'd hope.

I'm no Luddite by any stretch, but people need to understand that the responsibility of software development has increased in this context across the board. How much is too much? How little is too little?

And who do you want to decide?

Personally, I see the need for all of this on many different levels, but I'm concerned that people don't understand that the data they put out there can eventually be used for decisions that impact their lives. The teenagers posting things on Facebook may not realize that a risqué video might have unintended consequences a few years into the future, when they're looking for a job.

Take a moment and think about it. It's not all bad, it's not all good - but it's definitely real. Be aware, be responsible.

 

Picture at top left courtesy Matthew Montgomery via this Creative Commons License.
