Monthly Archives: April 2014

Ideas surrounding surveillance pervade our society. With the advent of the internet and all the complications and possibilities it brought in terms of communication, access and expression came the possibility of the information we share being used by those in power or by the providers of access. Websites like Google store thousands of billions of bits of user data, creating detailed and diverse user profiles that customise their products and experiences. Governments too have access to a lot of this sort of personal data, the US government being the most famous recent example of how leadership can use an arm of its organisation (the NSA) to monitor not just its own citizens but the citizens of many nations internationally. The strangest thing I’ve found about this new dimension of always-on, always-aggregating internet surveillance is the widespread ambivalence about the surveillance actually happening. My own brother, when I brought the topic up, just said to me, “What does it matter? It’s for security, I don’t do anything illegal”. It’s a point of view that’s tempting to agree with, but as Evgeny Morozov points out, it actually matters a great deal.

Morozov’s article is an extensive breakdown of why the concept of privacy matters today. One of his most resonant points is the one that directly addresses people like my brother – that the debate surrounding privacy must be politicised. My brother might say “But I don’t care about politics”, but that isn’t the point; it’s not about which side you’re on. This debate is about bringing concerns of privacy into a wider scope, articulating their political consequences regardless of their direct effect on our lives. To put it bluntly – it’s not all about us and protecting our own butts from the law; it’s thinking about the greater political ramifications that collecting private data may have.

After this, Morozov tends to get a bit provocative in his views on privacy. They kickstart the debate by bringing back consideration of the political, but seem a little antagonistic in their wording. He says we need to “sabotage the system” and reconsider “our fixed preconceptions about how our digital services work and interconnect”. This ‘throw out everything we know’ tone matters less once he brings it back to the idea of democracy being more important than matters of privacy in public debate. Other authors like Daniel Solove, while still helpful in explaining how things like the NSA scandal are relevant to privacy, don’t focus enough on the bigger picture for the layman to care. The aggregation effect, for instance, is interesting, but why should the average and (mostly) law-abiding citizen care about the compilation of data into an image of themselves?

There’s a film from a few years back (12 now – WHAT? 2002 was 12 years ago?), based on a science fiction story by Philip K. Dick, called Minority Report. The film has all this stuff about precognition, but that’s not important to this debate about surveillance. It does, however, paint a vivid picture of a realistic extension of modern surveillance technologies. Ads are customised to Tom Cruise’s character as he walks past them in the train station, and his iris is scanned as a mechanism of ‘ticketing’ on board the train itself. Here’s where Morozov’s consideration of the political becomes relevant again. The police use this infrastructure to control crime and to chase suspects – in this case an innocent one. Ads and transport systems are providing aggregated information to the police for the purpose of subtle control. It’s not the fact that he’s committed a crime; it’s the fact that they can use all this information to assume control and impart the belief that someone may have committed one. I think that’s where my brother and others who don’t care are going wrong. They’re missing the point that you don’t necessarily have to have committed a crime for surveillance to be a concern. If they like their freedom, they should care.

Yep, so after spending a little while trying to wrap my head around what all these words like vectors and frames meant in the context of our media consumption, everything started to click. Obviously there are the really common debates between frames, like those in the music industry or in journalistic practice, but I’d like to go to my age-old favourite topic of discussion: the movies.

Movies have a very similar framing divide that pits the “capitalist” studios against the sharing and caring “pirates”. Film and TV piracy is reaching record highs lately – just see the season 4 premiere of Game of Thrones, which at one point saw more than 120,000 people seeding the show on torrent websites, and that doesn’t even begin to include illegal streams and further copies of those copies. Millions of people watched this show at once, and it’s all because of this process of framing. Piracy is framed as something that is mostly ‘okay’ for the general public – but this framing is defined and generated by the general public only (you’re not gonna see HBO being cool with Game of Thrones being pirated). Actually, I’m going to stop here for a second and say this is where frames get confusing and start to interrelate, because the chief executive of HBO did in fact come out and ‘support’ piracy, stating that it didn’t affect the show’s revenue. Gregory Bateson seems to be the one referring to this idea of frames being contradictory or paradoxical, as it is here. It does no good to think of the piracy debate as hard and fast frames that have no communication between each other.

The framing of modern film and TV piracy is social, psychological (in that it involves a lot of group validation) and temporal. It’s a product of its time, in that right now demand for the instant is stronger than it’s ever been. This is where the vectors come in, attempting perhaps to negotiate the relationship between the pirate and corporate scenes of film distribution. Netflix is a popular example of a corporate-controlled vector trying to negotiate between frames. It offers much of the instantaneity of illegal downloading while still allowing copyright holders to receive payment through monthly membership fees. People are willing to give it a shot because it offers a service similar to the one they can get for free, without the complications of illegality, and with the convenience of watching content instantly on any device without any tricky file conversion. The Netflix app is everywhere! Not that I would know – it’s not available here in Australia.

Lakoff and Johnson, however, go even simpler than my little illustration here. Or maybe it’s more in-depth, depending on how you think of it. They describe frames not just as patterns of thought but as “structures of feeling” that literally allow us to create and understand our reality. They use the example of chairs (p.116) to illustrate how the image of the chair has an intentionality that allows us to implicitly know what it is just by the images it conjures up. So frames are broader than just sides in an argument – nearly everything is part of a frame.

Frames bring together a range of experiences and in the process give things like going to the movies or going to a restaurant a ‘reality’. They are a way of negotiating our own experiences.


Time Warner, Inc. CEO admits Game of Thrones piracy is good for HBO

Lakoff, George and Johnson, Mark (1999) ‘The Efficacious Cognitive Unconscious’ in Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought, New York: Basic Books: 115-117.

The development of technological systems like computers is leading to many changes for society. The one that has fascinated me the most is the ramifications of automation and what it means for our relationship to reality.

The rise of robots as both analogue for and replacement of humans is a complicated area. Straight away I’m reminded of a somewhat awful movie from a few years back, Surrogates. It stars Bruce Willis, who plays a cop in a world where everyone just stays at home in a chair while they remotely operate an attractive and physically unhindered robot version of themselves (here’s the trailer). Besides the questions this raises regarding why someone would want their robot to look like Bruce Willis, it helps to illustrate what Sarah Gardner and, to a certain extent, Ben Eltham wrote about in their articles. Eltham discusses the profound social implications and explosive politics of automation. He posits that with automation we’re switching from a labour-focused industry to a capital-focused industry, and that the real winners of this change are going to be a small selection of millionaires. It’ll significantly change the relationship between employers and labour, because traditional labour will be non-existent, and these multimillionaire media conglomerates will have even more vertical control of their businesses. Top-to-bottom control will mean that humans will have to reconfigure their relationships to identity and to society.

Beyond just money, our jobs define a significant part of our identity in the modern world. Eltham mentions that our social circles, our goals and our human relationships are all defined by our place and line of work. I don’t think this has to be approached with as much scepticism as Eltham brings to it, though; humans have constantly redefined their relationships to each other over time, and while it’s impossible to predict with accuracy what automation will do to us, I think we’ll work out ways to give ourselves purpose and maintain our human relationships. Probably in a ‘Surrogates-esque’ fashion, where machines come to define us and act as analogue representations of our personalities, wants and desires. Sarah Gardner’s article shows us a potential version of these surrogates that’s coming in our future: robots with behaviour-based intelligence that can be controlled by humans but learn of their own accord. The article briefly mentions that this process of automation only really poses a threat, for now, to those who are uneducated and in skills-based industries. I’m more concerned, however, about what this means for conversation and communication. The robots Gardner mentions can be programmed ‘drunk and one-handed’, and if we don’t need to explain or train any more, what kind of cultural damage will we be doing to ourselves long term?

Now, I don’t know if I’m reading the work of Paul Dourish correctly, but he covers a whole history of human-computer interaction. One of the most interesting tidbits from his writing was that all computer advancements have stemmed from an “expansion of human skills and abilities” (p.17). He suggests that tangible and social computing are based on the same social principles. Automation, especially through human analogues (and even more so ones we control), exploits our ‘regular’ daily human interaction and adds a new medium through which these relationships are negotiated.

So what happens when we have nothing to do because of automation? Well, Surrogates (side note: I never thought I’d be using this slightly garbage movie as an example of anything – lucky it has good ideas) suggests that people will just take their attractive robots and party and have sex with each other, so that could be it. If Eltham is right, however, the future looks bleak. Gardner lends us a little bit of hope, though: we might be in control more than we think, acting as a check and even building these robots. Of course this could definitely change, especially for the uneducated. For the short term, everyone’s job is safe.


Dourish, Paul (2004) ‘A History of Interaction’, in Where the Action Is: The Foundations of Embodied Interaction, Cambridge, MA: MIT Press: 1-23.

Eltham, Ben (2014) ‘Robots want to take your job’, New Matilda, February 13.

Gardner, Sarah (2013) ‘A Robot for Every Job’,, February 18.