A few weeks ago, having just started a new job and about to get my first full paycheck, I checked my bank account balance online to find that $299 had been taken out by PayPal. It drove me into a negative balance in my checking account the day before payday. I immediately logged into my bank's website and disputed the charge, then went to PayPal, where I cancelled the transaction and reported it as fraudulent. I changed my password and security questions.
Often, people involved in any type of Internet development need to be reminded that behind all the algorithms, IP addresses and gadgets there are actual human beings. Human beings bring a context that no gadget to date can match; despite the efforts of computer scientists to build machines that pass the Turing test, our context is our own and varies from person to person.
Because of that I appreciated 'How Facebook Builds A Digital Signature for You (and Your World)'. It puts a very positive spin on things that make anyone interested in privacy cringe. Reality is somewhere between the positive and the negative - something that will keep people arguing, and writers supplied with content, for decades, perhaps centuries. Beyond that level it gets even more interesting.
While the article talks about Facebook, it's important to note that the algorithms being discussed aren't Facebook-specific. Many websites do it - better said, many companies do it. Google, Microsoft's Bing, Amazon.com, eBay, Twitter, Instagram and just about anything that is 'free' to use and popular have such algorithms churning away behind the scenes - getting to understand you so well that I'm surprised economists haven't publicly clued in on the data. Our behaviors, our expectations and much more are available through such sites.
When it comes to social media, it's not just about what you say about yourself. It's almost always dominated by what others say about you - and when it comes to your bank transactions, your bank is likely selling your anonymized purchase history to someone for a price.
Yet, as the article also points out, the analysis of the data can be flawed. Daniel J. Solove wrote about this in The Digital Person (2004). You can easily become persona non grata for peculiar reasons determined by algorithms as well. They're fickle things, these algorithms, and it's not too hard to understand that what you connect with is determined by them. Those connections are, by and large, used to drive things at you - or to determine what shouldn't be driven at you - in advertising. It becomes more peculiar still when governments, employers or anyone else nosey enough to pay for such data come into play.
But again, that's not really what I'm writing about.
What I am writing about is that the algorithms behind these connections constantly evolve and are becoming better at determining ever smaller slices of reality. As the article says:
The trouble is that our real world — and how we describe and experience it — is constantly changing.
User experiences, and everything related to them, are constantly evolving. As an example: when you click 'Like' on Facebook, an algorithm somewhere makes an assumption about what you mean. When you retweet something on Twitter, what does that mean? Working that out is the job of algorithms.
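The 'Like' example can be made concrete with a toy sketch. Everything here is invented for illustration - the topic labels, the scoring and the function name `build_interest_profile` are my assumptions, and real systems weigh thousands of signals - but the shape of the inference is the same: count what you engage with, normalize it, and treat the dominant topic as your strongest advertising signal.

```python
from collections import Counter

def build_interest_profile(like_events):
    """Toy interest profile: count 'Like' events by topic and
    normalize each count to a score between 0 and 1."""
    counts = Counter(like_events)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

# Hypothetical stream of topics the user 'Liked'
likes = ["cycling", "cycling", "cooking", "cycling", "politics"]
profile = build_interest_profile(likes)

# The dominant topic becomes the strongest signal to advertisers.
top_topic = max(profile, key=profile.get)  # "cycling"
```

The point of the sketch is how little data it takes: five clicks already produce a ranked picture of you.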
Before you start wrapping your head in tinfoil and locking your tapioca pudding in the refrigerator, thinking that this is all done in top secret offices of the government or corporations bent on creating Skynet, hit pause. It happens every day. Traffic lights are a brilliant example. Actual data is used to determine how long lights stay red, yellow and green (or should be). The supermarket with that loyalty card tracks your purchase history so it can decide what its inventory should look like. And, if you think about it, all this data is flawed in that it is limited to a dataset and to the algorithms that chew away on it.
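The traffic light example can be sketched the same way. This is a toy proportional split, not a real signal controller; the function `green_split`, the approach names and the 60-second cycle are all assumptions for illustration. It simply hands each approach green time in proportion to the traffic observed on it.

```python
def green_split(counts, cycle_seconds=60):
    """Allocate green time across approaches in proportion to
    observed vehicle counts. A toy data-driven timing rule."""
    total = sum(counts.values())
    return {approach: round(cycle_seconds * n / total)
            for approach, n in counts.items()}

# Hypothetical hourly counts for the two directions at one intersection
split = green_split({"north-south": 300, "east-west": 100})
# → {'north-south': 45, 'east-west': 15}
```

And the flaw mentioned above shows up immediately: the rule is only as good as its counts. A sensor that misses cyclists, or a dataset from the wrong time of day, produces a confidently wrong timing plan.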
As a programmer, I expect I've had more to do with this sort of thing than a non-programmer would - but it's all very real data. You might think that your medical history is top secret, but if your bank is getting itemized billing for your medication, it may know more about you than you think - and that information isn't as sacred as you'd hope.
I'm no Luddite by any stretch, but people need to understand that the responsibility of software development has increased in this context across the board. How much is too much? How little is too little?
And who do you want to decide?
Personally, I see the need for all of this on many different levels, but I'm concerned that people don't understand that the data they put out there can eventually be used for decisions that may impact their lives. Teenagers posting things on Facebook may not realize that a risqué video might have unintended consequences a few years into the future when they're looking for a job.
Take a moment and think about it. It's not all bad, it's not all good - but it's definitely real. Be aware, be responsible.
It's official, for better or worse: 'tweet' is now recognized in the Oxford English Dictionary despite breaking at least one OED rule: it's not 10 years old yet.
'Big Data' also made it in, as did 'crowdsourcing', 'e-reader', 'mouseover' and 'redirect' (new context). There's a better writeup in the June 2013 update of the Oxford English Dictionary (OED) that also dates the use of the phrase, "don't have a cow, man" back to 1959 - to the chagrin of Bart's fans everywhere, I'm sure.
As a sidenote, those who use Twitter are discouraged from being twits, and 'sega' is actually a dance from the Mascarene Islands.
It's always interesting to watch how language evolves, and sometimes it's a little disturbing. I honestly don't know how I should feel about 'tweet' making it in, as the brand 'Twitter' is based on the word 'twit' (see the sidenote above)... but hey. Oxford says it's OK, and twits and tweeters everywhere can now rejoice.