We pasted a map of Greater London onto cardboard, cut out the map, and then tried to balance it on a pin-head. The balance point, also known as the centre of gravity, can be said to be the geometric centre of London.
But for the update they went a bit more high-tech.
Step forward Tom Hoban, who’s now refined the method and thinks he’s found the centre of London to much greater precision. Rather than using cardboard and scissors, Tom traced an electronic map in AutoCAD software. He was then able to find the shape’s centre of gravity digitally, removing the imprecision of our balancing-on-a-pin malarky.
I thought the 'malarky' of the pin and card was really nice. Very hands-on. But it got me thinking about how easy it would be to work that out for other places. (That's how my brain works.)
Find the shape
The first challenge is finding the 'shapes' of a city to work with. In these days of data journalism and digital mapping, I wondered if that kind of 'data' existed, and it does; kind of. There are plenty of data sets that offer shape files: the data needed to 'draw' the shape of a city or (more commonly) an electoral ward, county or country. You see these a lot in visualizations of data like voting records. So it was just a case of finding one with about the level of detail I needed.
As you'd expect, these shapes are not uniform; they are polygons. So it took me a bit of Google work to find that the 'centre of gravity' of a polygon is called its centroid.
In mathematics and physics, the centroid or geometric center of a two-dimensional region is, informally, the point at which a cardboard cut-out of the region could be perfectly balanced on the tip of a pencil, assuming uniform density and a uniform gravitational field.
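That balancing-a-cardboard-cut-out definition has a neat closed form: for a simple (non-self-intersecting) polygon you can compute the centroid straight from the vertex coordinates using the 'shoelace' formula. Here's a minimal Python sketch; the rectangle is a toy shape, not a real boundary:

```python
def polygon_centroid(points):
    """Centroid of a simple polygon given as a list of (x, y) vertices."""
    area2 = cx = cy = 0.0
    # Walk each edge, pairing every vertex with the next (wrapping round)
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        cross = x0 * y1 - x1 * y0  # the 'shoelace' cross term
        area2 += cross             # accumulates twice the signed area
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3 * area2), cy / (3 * area2)

# A 2x1 rectangle balances at its middle
print(polygon_centroid([(0, 0), (2, 0), (2, 1), (0, 1)]))  # -> (1.0, 0.5)
```

Real boundary files have thousands of vertices (and sometimes holes and multiple parts), which is exactly why a proper GIS tool ends up being the sensible route.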
So it was a bit of piecing together. I knew you could easily map shape files using Google tools like Google Fusion Tables, and I knew you can do some clever maths using scripting. So the next step was to put it all together with more Googling around 'calculate the centroid of a polygon in Google maps'. Which, by a country mile, is the most technical and intelligent-sounding thing I've googled in the last ten years.
Some time later…
Cutting a long Google very short, I ended up recognizing that doing it with Google Maps was going to be hard – at least beyond my skills. But my searching revealed that there was some good mapping software, or GIS, available that might do the job. What's that then…
A geographic information system (GIS) lets us visualize, question, analyze, interpret, and understand data to reveal relationships, patterns, and trends.
I ended up using QGIS, an open-source mapping program that works on PC and Mac. I won't lie, it's a bit of a bind to set up. But once that's done you have a pretty powerful set of tools, and one that would be worth a look for people doing a lot of mapping.
What’s great about QGIS is that once the ‘polygons’ are loaded in, it has a very neat menu item that calculates the centroids. Instant centers of all the areas on the map in one click!
Here’s a quick how-to:
Download and unzip the mapping data. If you look in the unzipped folder you’ll see a file with a .shp extension. That’s the one we want.
Click the Add Vector Layer button or pick Layer > Add Vector layer from the menu
Browse to the shape file (.shp) in the unzipped folder and open it
A nice rendering of the shape file appears similar to the one below.
Make sure the layer you have created is selected and then select Vector > Geometry Tools > Polygon Centroids
The system offers a dialogue box. It wants to save the data as a new file. I saved mine in a new folder called centroids but you can put it where you like. Make sure you check the Add Results to Canvas option or you won't see the centers. The result is something like:
That’s all the centroids calculated and plotted.
Getting the data onto a Google map.
For a number of reasons I wanted to make sure I could share the results on a Google map. One of the easier ways to get any complex location data into a Google map is to use Google Fusion Tables. They play nicely with location information saved as a KML (Keyhole Markup Language) file.
QGIS makes short work of this.
Select the new layer with your centroids in it
Select Layer > Save As
Pick Keyhole Markup Language (KML) from the Format option
Select a location and filename to save the content. Make sure you keep the .kml extension.
Repeat the process with the original layer (with the local authority areas on it)
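If you're curious what QGIS has written out, KML is just XML, and each centroid ends up as a Placemark along these lines (the name and coordinates below are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Example ward centroid</name>
      <Point>
        <!-- KML order is longitude,latitude -->
        <coordinates>-2.70,53.76</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
```

The longitude-first ordering trips a lot of people up, so it's worth a glance before uploading.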
The process to get the files into Google Fusion Tables is pretty easy. Here's a slightly amended version of what Google suggests:
Go to Google Docs. Sign in to your Google Account or create a Google Account if you don't already have one. (Note that while you can use a Google Apps for your Domain account for Fusion Tables, you will not be able to create maps.)
Click the “Create” button.
Click the “Connect more apps” bar at the bottom of the resulting list.
Type “fusion tables” in the “Search Apps” box and hit the “Enter” key.
Click the blue “+ CONNECT” button, then click the “OK” button in the confirmation dialog box.
Click “Create > Fusion Table (experimental)”.
In the Import new table dialog box, click “Choose File”.
Find the KML file you created from QGIS
Check that the data is formatted correctly and click “Next”.
Give your table a name and click “Finish”.
Once it's imported you can click the Map tab and you'll see the elements mapped (either the outlines of the areas or the dots that represent the centroids).
You can embed the map straight from Google Fusion Tables like this
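I haven't reproduced my exact embed here, but the snippet Fusion Tables hands you is an iframe along these lines; the table ID, query and sizes below are placeholders, so treat it as illustrative rather than copy-paste ready:

```html
<!-- Illustrative only: Fusion Tables generates the real src for you -->
<iframe width="500" height="300" scrolling="no"
  src="https://www.google.com/fusiontables/embedviz?q=select+col0+from+YOUR_TABLE_ID&viz=MAP&t=1&zoom=10">
</iframe>
```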
Once you have it on a map you can also take advantage of the satellite view and the Street View tool on Google Maps to get a good look at the center of your world.
This may all feel like a sledgehammer to crack a pointless nut! I guess it is. It's a bit of fun that spiralled. The best I could say is that it falls into my find-a-tool-that-answers-a-question methodology. But here are some observations and what I learned along the way:
The center really does depend on the boundaries you pick. The picture at the start of this post is based on the Urban Audit of Greater Cities boundaries for Preston (data). That's different from the center that the Unitary and Borough boundaries throw up. (That one's in a field just near the M55 junction on the M6.)
Picking the Full Extent version of the files does skew things a little, as it describes the shape of an area even if some of it stretches into the sea! So the methodology isn't rock solid on a number of counts.
There are lots of data sets to play with. QGIS means you could load loads of them up and compare.
Using QGIS ties you to the desktop – not great if you're in a newsroom with locked-down IT.
Using QGIS opened my eyes to the power of GIS software in general and how it could be part of a data journalist’s toolkit. But if you’re doing a lot of data mapping (rather than mapping) I do think something like Tableau is the better place to focus your time.
First off he cites 'impact'. By impact he means the way that the digital landscape dilutes the impact of even the biggest stories. It's an interesting way of describing one of the negative effects of the networked environment, but it's a concept that suggests news has a finite impact; with so many routes for news to travel, it eventually, well, runs out of steam.
I think there is some merit in the idea – impact is a neat phrase – but I do think it stems from a slightly institutional (print) perspective. What it's really saying is that the value to the producer is harder to conceptualise (and monetize). In that sense I don't think that 'news' has any less impact. In some ways it can have more. We can't really think that news is a stone to throw and then measure its success by the size of the splash! In a digital world we are measured against the ripples.
In that sense I think 'reach' is a better word than impact. Impact is the same mentality that still demands we recognise the words 'breaking' or 'exclusive'. The challenge of reach suggests there's less of a problem with the 'news' and more a problem with the industry's capacity to inhabit the broader landscape and connect with the audience.
That’s not to criticise Brock. He clearly sees the impact of impact!
Even if a single outlet has something big and releases it first, a scoop is not quite the event that it was. Partly because a revelation will spread a long way very fast and won’t be “broken” by being published to many people simultaneously at a set time. What we used to call “news” was once prepared like a conjuring trick or play behind a curtain and revealed at a fixed time; if it was big news, its release was an event.
Brock says that means " 'news' is an idea which is being bent into a different shape." I like that phrase. But I'd go a little further. News is broken as a concept. What news is has changed (a factor Brock also identifies). But news as an object (something we distribute) is also broken apart by the network. People chop it up and repurpose it for their own use.
I think what we are seeing is people, just by expectation and consumption habits rather than any discrete motivation, pushing against attempts by media organisations to control (own, whatever you like) the structure, purpose, shape and, importantly, the life-cycle of news. I see a lot of parallels with the Restart culture, which wants to move beyond…
…the culture of constant upgrades and disposal, The Restart Project reconnects people with repair, preparing the ground for a future economy of maintenance and repair. We are supporting groups across the world which would like to replicate our community work.
People getting together to bring those dead electronic and electrical devices back to life. It’s all about sustainability and usefulness. Digital means people are beginning to restart news in the same way.
For me that presents interesting challenges to Brock's suggestion that, in making clear the 'value of what journalists do', journalists can:
insist that verification and investigation will not happen naturally in a ceaseless flow of data, conversation, gossip, rumour and manipulated misinformation; someone must make a choice to do these things and find the resources to support them. They can insist that big ideas depend on long passages of written words to spread and be debated. They can insist that a space to establish what is most likely to be accurate and true in the midst of what is now a marketplace for noise is something of value to democracies and worth fighting for.
I worry that we think we can or should insist on anything. I think it's less about insisting and more about (a) proving it and (b) being connected enough for the community who might value it to be able to tell them.
Funding journalism, or the journalism industry?
The last part that stood out was Brock’s assertion that:
While everyone thrashes around looking for a business model, philanthropy has a crucial role to play in bridging the gaps between a dying business model and a new one.
I bristle slightly at that one. The idea that large, profitable organisations should benefit from "charity" rubs a little. I know that in the context of this conversation (and others) we are using it in the 'fourth estate' sense rather than the 'where did our profits go' sense. But I don't think you can split the two that easily. And anyway, plenty of rich individuals seem prepared to invest in journalism already. That seems to have worked well over the years!
As I say, Brock's piece is worth a read. Definitely (as this post proves) food for thought. It's a good taster for his presentation (if you're lucky enough to be in Perugia) and an indicator that his book "Out of Print" is worth checking out.
But, and this is no criticism of George, you have to wonder if the biggest factor that is going to change the shape of journalism next is not going to be the journalism industry.
I've been thinking a lot about coding. Staring at some code for an hour and then realising that it's not working because you spelt 'slider' wrong will do that! So it was nice to see a piece on the Guardian website in which Richard Sambrook, head of the Cardiff J-school, ponders the whole issue of journalism and coding. It made me think about how I learnt to code and I wanted to share that with you.
But before that, a brief detour to Richard. He starts with a question:
Do journalists need to learn computer code? It’s a question which has raised passionate debate in the US – with typically polarised responses. As yet in the UK it elicits little more than bemused curiosity. But it’s an increasingly important question as media adapts to the volatile requirements of digital technology and changing consumer expectations
The comments on the piece are also worth a read. They have the usual range of views, from "whatever they do it won't be proper coding" through "it's cheaper to get someone else to do it" and out the other side to "don't journalists have enough to do?".
I'm not sure whether the bemused curiosity is aimed at the question or the US debate. I'm very much in the camp that raises an eyebrow at the debate. There is no doubt the industry wants it, as much as the industry wants anything these days. As with data and social, there are always going to be unicorns. But for me, talking about journalists and coding is a moot point. It happens. Debating whether it's important seems to take time away from actually trying it.
It strikes me (and I know I’m not alone in this) that this is a problem of language rather than utility or necessity. Think about the debate that the phrase Citizen Journalism creates. (It’s OK I’ll wait while some of you stop shouting at the screen). Now imagine you call yourself a coder and then some journalist comes along and starts saying what they do is coding! That’s the debate.
The industry has co-opted coding as a shorthand for many differing practices and we use it inconsistently (there is no 'correct' here): everything from a bit of HTML, to using R to do data journalism, to doing a bit of hardware programming with your Raspberry Pi. Like many other things (data journalism etc.) it's a reason to talk about other, more fundamental issues facing the industry. Coding isn't a thing anymore. It's a trope.
Sambrook's article is a great example of that. Dig below the surface and he's really arguing for a balance of technical skills that support a more 'scientific' type of perspective. That's a nod to the 'precision journalism' school of thought, one echoed in a comment by Liz Hannaford (whose blog is worth a look, btw).
My 5 steps to becoming a coder (for what it’s worth)
<h1>Andy's time machine</h1>
<p>Mix the old and new with Andy's time machine</p>
<!-- don't worry about the weird img src here -->
<!-- I'm using google drive to host the images -->
<!-- Replace these with the images you want to use -->
<!-- The modern image here -->
<img src="http://drive.google.com/uc?export=view&id=0B1Gp8j4WdzDgc0duVnpzZU5sS28" />
<!-- The old image here. -->
<div id="old" class="overlay">
<img src="http://drive.google.com/uc?export=view&id=0B1Gp8j4WdzDgX0RNX3E0a3JzazQ" />
<p>Move the slider to go back in time</p>
</div>
<!-- The slider; its value drives the opacity of the #old overlay -->
<input id="slider" type="range" min="0" max="100" value="100" />
<script>
  // Fade the old image as the slider moves (0 = fully transparent)
  var slider = document.getElementById("slider");
  slider.oninput = function () {
    document.getElementById("old").style.opacity = slider.value / 100;
  };
</script>
Yes, I’ve been doing this a while so some of it has stuck and that helped speed up what I was searching for. But along the way I learned how to do loads of things that I’ve now forgotten. It did the job and I moved on.
Getting a job done.
OK. It's semi-serious advice and I'm definitely not saying that coding is easy. And in saying that, I hope I've tempered any criticism that coders might infer from this post, or any apparent perception that 'I don't get' how busy journalists are. But the point for me is not that coding is any more or less useful than any other process you co-opt into your journalism.
The key is that you need to know what your journalism practice is. After that you can see what fits and what doesn't. If the coding is too much then it's about co-opting people into the process.
Don’t learn ‘coding’ and look for a problem to solve. Find a problem and then ask if a bit of code might help. If the problem is too big find someone who can help.
That last part – engaging with people who could help – is another good reason to dive in, have a go and pick up a bit of the language. It's like learning a little bit of a foreign language for a holiday. People who speak it often appreciate the effort. Those who've invested some time learning this stuff like it when you make an effort to understand what they do – you know, a bit of journalistic empathy!
Whatever the motivation, on a very basic level I'd recommend giving coding a go. If you find yourself doing (or really enjoying) lots of this stuff, then actually learning a structured approach (like learning the piano rather than busking) will only enhance the process. But for me there is a really basic reason, if the right opportunity comes along, to have a go. When you press run, or refresh, or whatever you're doing to make it go, it's actually quite a buzz when it works. There aren't many things we make and do these days as part of our jobs that get such instant feedback.
It's not a new thing, but it's an easy way to add a bit of visual interest to archive material by doing a 'then and now' kind of mashup where the past peeks through in the present image – like the example above. I think that, in general, they are a nice bit of content and one that plays to the strengths of a lot of media orgs who are sitting on massive archives.
So I set myself a few challenges: how easy is it to recreate them using free tools, and could I add a bit more interactivity to them?
Mixing images – The ingredients
An archive “old” image and a current new image.
Access to Pixlr on the web
A word about the archive image
The archive image came from a local history site, http://chorltonhistory.blogspot.co.uk/, and is used with permission of the site owner Andrew Simpson (thanks Andrew!). It's taken from the Lloyd collection. It's a great site and worth a look to see just how much depth of and passion for local history there is to tap into.
I found the image, amongst others, via a Google image search, and it's worth mentioning just how important it is to click the Visit page option and not just View image. Go to the site, and find out more about how the image is being used and by whom. It was a very short process for me to get permission from (a very helpful) Andrew.
Armed with an archive image, I took the current image myself. I took the new image with my iPhone, which also had a copy of the old image on it. I used that to get an approximate position and then took the pic. The most common subjects for these kinds of archive mashup are buildings. They are great as they give you clear lines and points of reference, but railings, road kerbs and stuff like that can also act as hooks to get a good match for position.
In the toolbar down the left-hand side of the window, click the Move tool icon (first one on the right) or press the V key as a shortcut
Click on the “old” image
Select Edit > Select All or CTRL+A to select the whole image
Make sure you have the Move tool still active, press V to make sure.
Click on the “new” image
Click Edit > Paste
The old image is pasted as a new layer in the new image. Pixlr, like many image editing apps, mirrors the Photoshop model of being able to break an image down into layers so you can edit elements individually. The old image is pasted in as a new layer, Layer 2.
Matching up the image.
This next part is the creative (or, depending on your view, fiddly) bit. You will need to use a number of tools to move the image around and match the positioning. To make this easier we can change the transparency of the old image (the top layer) so we can see the new image below.
In the layers panel, click the Layer 2 entry. I'd suggest clicking the thumbnail image; clicking the text can put it into edit mode, where you can change the name of the layer.
In the layers panel click the Toggle Layer Properties button (it's in the bottom left-hand side of the layers panel)
The panel expands with a slider for opacity. Try dragging the slider back until you get a good mix between the two images. You may find that you change this from time to time as you work with the image.
To move the image around to get it into generally the right position:
Make sure you have the Move tool still active, press V to make sure.
Click on the old image you pasted in and move it around to get an approximate position.
Make sure you have the Move tool still active, press V to make sure.
Make sure you have the Layer 2 (old image layer) active
Click Edit > Free transform or press CTRL+T
Drag the blue boxes to change the width and height TIP: Holding down shift whilst you drag will maintain the aspect ratio
Hit Enter to commit the changes
Make sure you have the Move tool still active, press V to make sure.
Make sure you have the Layer 2 (old image layer) active
Click Edit > Free transform or press CTRL+T
Hold your mouse near one of the four corners until a small, circular arrow appears. Click and drag, and the image will rotate.
Hit Enter to commit the changes
Sometimes the perspective of an image isn’t quite right. You can apply some distortion to an image to try and fix this.
Note: This isn't as fine-tuned or flexible as in Photoshop, so it's worth a little effort to get the original image framed well if you can.
Make sure you have the Move tool still active, press V to make sure.
Make sure you have the Layer 2 (old image layer) active
Click Edit > Free distort
Drag the blue boxes to skew the image
Hit Enter to commit the changes
Getting a better look at the image
Whilst you are positioning or, as we will explore next, editing the image, it’s often useful to zoom in and move around an image to get a closer look at the detail you’re matching.
Click View > Actual Pixels to show the image full size
Use the red square in the Navigator panel to move the view around
If you need to get in closer use View > Zoom in
To push out use View > Zoom out
To return to seeing the whole image in the window use View > Show All
Deleting unwanted elements.
Once you have the image in about the right place you can begin removing the bits of the old image you don't want. There is a mix of tools you can use to remove parts of the image with varying degrees of accuracy.
Make sure you have the Layer 2 (old image layer) active
Click the Eraser tool icon or press E for the short cut
Select a size and shape for the tool in the Brush options across the top of the screen
Click and drag across the part of the image you want to delete.
The soft-edge brushes are often the best as they make for a nicer mix between the two and you’ll get a better result mixing sizes and zooming in to pick out detail.
You can use the Lasso tool to draw round content you want to delete. This is often good for cutting round cars or people. One way of working: by setting the opacity of the old layer, you can draw round objects in the new layer and 'cut a shaped hole' in the old layer, like the image above.
Click the lasso tool or press L for the shortcut
If you want a soft-edge around the bit you cut out, set the Feather option. The value here will depend on the amount you’re removing. Try a large value (70 or so) for big chunks and smaller values for finer detail. Always set this first.
Click-and-hold and drag the tool around the part of the image you want to cut out.
Hit the Delete key
What if I make a mistake?
Pixlr has a history function which is really useful for a bit of trial and error. If you can't see it, select View > History to toggle it. To go back, just scroll through the history and click on the last 'good' point.
Saving my image
When you’re happy with the result.
Select File > Save
Save the image to a location of your choice.
The images alone can make for a really interesting and engaging slideshow. But with a small amount of code, you can add a slider that lets the user mix between the two images. You can see an example of the code below. It’s not a complicated bit of code (I’m no coder). All it does is layer the old image over the new image and then change the transparency of the old image with the slider.
BTW You can get a little more detail on how my approach to coding works on this blog post about journalism coders.
This exercise was, as much as anything else, a reason to create a tutorial that I could add to a list that students can experiment with. The actual process is pretty straightforward, if fiddly. But it's a good chance to stretch the creative muscles and get useful content that we know plays well with an audience. It also helps reinforce the value of local knowledge and of taking a step beyond a Google image search. The Chorlton history blog was a diverting and interesting find, and if this was something I was going to do more of, contacting Andrew would be the first step in building a useful contact.
The code part is just me playing but, and I'm going to blog about this, don't be put off trying. It's not tricky code, and tools like Codepen make pulling it together more structured. Feel free to fork and play and, as always, comments and feedback are always welcome.
It's a subject that isn't going away and it's also one that generates a huge amount of debate – data journalism. If ever there was a perfect hook to hang all of journalism's best and worst on, it's data journalism! But a recent flurry of tweets and a nice 'there's no reason not to try this stuff' post from Matt Waite focussed on one part of the debate: how should we be doing more of this in our j-courses, and who should be doing it?
It was something that Matt kicked off with a tweet:
Signs there is work to do: Data journalism is hottest thing going. Offer data journalism course. Two students sign up. Two.
There is an interesting point in there about adjunct courses – essentially but not exclusively online courses – which I think is fair. There’s no better way to put journalists (and students) off than combining maths and computers!
As I said in my response, we do 'data' across all of our courses and I thought I'd share an example of the kind of intro practical stuff we are doing with first years (year one of a three-year degree). It's done in the context of a broader intro to data and journalism and it's developed and expanded throughout the three years (more so as we are shifting things around in the courses), including a dedicated data journalism module.
My take at this stage is that data journalism is worth considering as part of a more structured approach to journalism. The students are no doubt fed up of my Process into content mantra.
Anyway. Of the two slideshows below, one is an intro/context lecture and the other is the related workshop. And, yes, I know there is a fair bit of visualization in there – charts and maps – which some data people can get quite sniffy about. We are careful to make the point that not all data is visual, but I do think a visual output can be a quick win for capturing people's interest. It's just the start.
Again, these are just the slides, there is the usual amount of narrative and discussion that goes with this. They are presented as is:
The short answer is: You’d think it would be easy. Actually it’s a bit of a pain.
The first step is finding a way to make the image/text side of things.
Making nice images.
I tried a few apps to see if I could get that combination of editing (cropping and image manipulation) and text that I got from Pixlr.
A neat solution to the image manipulation and cropping came from Aviary. Their app has a neat crop tool and the image manipulation/filter tools are nice to play with. But Aviary’s text tools are pretty limited. You can add text but it’s limited by size and is always center aligned. Not quite what I want.
I also had a look at the Instagram-focused end of the market. One app that I liked was AfterPhoto. It crops to a square ratio but the text tool is limited to one line at a time. What makes up for that limitation, however, is the ability to add 'layers' of text. Another option was Over. It shares a similar style of editing with AfterPhoto but the text tools are pretty flexible. It's not free though.
As it turned out, Pixlr was the solution to the problem on the iPad just as it was on the web, with its Pixlr Express app. Square cropping, nice text and image manipulation. Well done Autodesk! The only thing to remember with Pixlr Express is to apply all your filters etc. before you add text!
Being positive about it, you could say that you're spoiled for choice when it comes to image editing apps on the iPad. You could range around and cherry-pick the nice fonts and filters from a number of them.
Making the video
There are surprisingly few useful, free apps for video editing on iPads. 'But wait a minute Andy,' you cry. 'What about iMovie?' Technically you could say that's cheating anyway, as it's only free if you happen to own a swanky new iPad. The rest of us chumps paid for it! But it's nice and swish.
Sadly it falls at the first hurdle. In Apple’s cuddly style it demands that any stills fill the screen and are animated to make them dynamic and interesting. Now I love a good Ken Burns effect as much as the next man but it’s not what we want here.
Another issue is that you can't set the resolution of the video clip (you can't set a custom width and height), so any video produced would be cropped by Instagram. iMovie fail!
In terms of other video editing options, it's slim pickings. There are a few free video editors that I tried, but most failed when it came to keeping the images at the right resolution. Some did, but watermarked the video. In one way that was less of a problem as Instagram actually crops it out. But that's not the most ethical or fair way to go.
The best solution I found was an app called Flipagram. A very neat app that will quickly build up a slideshow for you. It has the added bonus of allowing you to record your own narration. That could be a real plus-point for those looking to leverage the audio-slideshow style of narrative. The downside is that it does add a watermark.
And the result…
But what about adding video…
If you do want to mix video and images (and have both behave in terms of resolution) then, I’m afraid, you’re paying for an app. Even if you pay, as I said before, it’s slim pickings. The big problem, as far as recreating instafax goes, is that the text tools on most editing apps are risible.
If I had to recommend an app (and a workflow) it would be a combination of VideoCrop (free) and Pinnacle Studio (£8.99). Use VideoCrop to crop the video to the right format and then use Pinnacle to piece it together. Pinnacle respects the aspect ratio of the video and images you use, so any video you output should crop nicely in Instagram. Be prepared to wrangle with the tools though (especially text and the mystical composite setting). It's a steep learning curve.
So it is possible to recreate my original experiment on an iPad using free tools. But the process underlined for me that the assumption that your iPad/smartphone/tablet is a multimedia powerhouse is pretty wide of the mark. Moving outside the TV box with video is a case of moving around apps. A combination of tools will get the job done but, as with most things, money buys you flexibility.
That said, if image slideshows are your thing then the Pixlr/Flipagram combination is a winner in my book.
Last week I spent a very pleasant day at the Newsrewired conference in London. I was moderating a panel on short-form video. It prompted a lot of thinking about what that actually is. But one example of what it could be was the BBC's project Instafax. I'm still a bit skeptical as to whether this is a 'new form' as much as a nice use of a platform. (I'll maybe blog more about that issue.)
Actually, I'm just more impressed that orgs like the BBC, Channel 4 and the Guardian are experimenting with visual storytelling online. They aren't alone. A number of startups like NowThisNews are experimenting with using micro-video on platforms like Vine and Instagram to reach that much-desired mobile audience.
Anyway, whatever I might think of the rhetoric around the experiments, I did think it was an interesting idea to show to students. It struck me as a fun way to introduce images etc. and think about telling stories in different ways. So I set about working out a way to do Instafax-style video on the cheap (well, free).
One of the things that was clear in the panel discussion was how much a lot of orgs still rely on quite expensive kit and infrastructure to make video happen. (The key seems to be in getting your initial settings right) Now we aren’t short of kit at the uni but we do have some restrictions on the tools we can use and things we can install. So I was looking at a solution that was pretty much web-based and as universal as it could be.
So here it is:
Instafax on no budget.
Some nice images of news stories (make sure you have cleared their use before you start)
Access to an image editor. Photoshop and GIMP are fine but in this recipe we will be using Pixlr.com
Access to a YouTube account
An Instagram account
A phone with the Instagram app to upload your video.
Making the images
Open up the image you want to use in your video.
Select the crop tool
Set the Constraint option to Output Size
Set the output Width and Height to 640px. Note: be careful how you use this tool. The crop will resize to 640×640, so if you highlight a small part of the image, or your image was small to start with, it can ‘blow up’ the selection and leave you with a blurry, pixelated image.
Use the text tool to add a suitable caption. It’s worth thinking about where you put your caption. It seems to be common practice to add a caption at the top or bottom but never in the middle of the image. I’m guessing that’s to avoid it being obscured by a play icon on some platforms.
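As an aside, the crop-and-resize step above doesn’t have to be done by hand. If you’re comfortable with a little scripting, the same 640×640 square crop can be knocked out in a few lines of Python using the Pillow imaging library. This is just a sketch, not part of the recipe proper – the file names are placeholders, and it generates a demo image rather than using a real photo:

```python
# Square-crop an image to 640x640 for Instagram, mirroring the Pixlr steps.
# Requires the Pillow library (pip install Pillow). File names are made up.
from PIL import Image

def square_crop(src, dest, size=640):
    img = Image.open(src)
    w, h = img.size
    side = min(w, h)
    # Take the largest centred square from the image...
    left = (w - side) // 2
    top = (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    # ...then resize it to the target output size.
    img = img.resize((size, size), Image.LANCZOS)
    img.save(dest)

# Demo with a generated placeholder image standing in for a real photo.
Image.new("RGB", (1280, 720), "gray").save("demo.jpg")
square_crop("demo.jpg", "demo-640.jpg")
print(Image.open("demo-640.jpg").size)  # (640, 640)
```

The same caveat applies as in Pixlr: if the source image is smaller than 640×640 the resize will blow it up and you’ll get a blurry result.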
Click the camera icon and click Add Photos to the project
Upload the images you created
Add the image to the timeline. Remember your video has to be 15 seconds, so stretch or shrink each slide to fill. A guide of four seconds a slide is not a bad starting point. It depends on the amount of text.
When you’re done, publish the video
When the video has been processed go to your video manager (youtube.com/my_videos or click video manager on the video page)
Click the edit dropdown next to the video
Click Download MP4
The video looks something like…
Getting it on Instagram
Copy the mp4 file to your device. Email is good, or maybe Dropbox would help here.
Upload using the Instagram app as normal
When you add your video to Instagram, don’t forget the caption. You can get quite a lot in there and it works well as a kind of summary/intro/cue for the story.
As a process it’s a bit clumsy, and the rendering up and down from YouTube doesn’t leave the crisp edges that you would get from using better kit (or the whizzy transitions). But I think it does the job and, with some music (which you could add using YouTube’s own editor), I think it’s a viable, entry-level way to explore image slideshows and mobile audiences.
What about adding video?
You can easily add video using the YouTube editor but Instagram will crop the outer edges, so make sure you frame the video with the key elements in the middle. Also, the YouTube editor’s text tools are very, very limited.
The big gap here is the ‘transfer to your phone’ bit. There is a site called Gramblr that will allow you to upload from the desktop, but it wants your username and password. If that’s a price you’re prepared to pay (and I’ve no reason to assume that it isn’t safe) then it’s a workable solution. But I think Dropbox or email is just as easy, and if you use the native app to upload you get all the other stuff like tags etc.
I’m convinced there is always real value in playing around with platforms. It isn’t just geeky tinkering. As I said, fair play to organisations that are experimenting in the way the BBC are. For me, this was as much an exercise in something interesting for the students to try – exploring new platforms and playing with kit – as it was any attempt to prove it could be done. But I think, like slideshows, this is an opportunity for those with plenty of images to explore new narrative styles.
Let me know what you think.
Oh, and hey BBC! If you’re looking to drop the insta bit, how about something that sums up what it is. Facts that you can see. Maybe, seethefax… seefax… something like that.
This semester my Monday mornings are spent in the company of our Foundation Degree students and this morning I was talking about online writing.
I spent a bit of time talking about headlines and how important they are in the digital world. They’re not just the start to a good article. Headlines are your online envoys; little text-tugboats sailing around the web, pulling people back to your site.
I mentioned Jakob Nielsen’s often-cited post on the BBC and their headlines – World’s Best Headlines: BBC News. At which point one of the students seemed to crack: ‘It’s always the BBC. Whenever tutors give us a good example, it’s always the BBC!’
After a few moments of reflection, I kind of had to agree. (although I also assured him that I was pretty convinced that it wasn’t because we were being paid by the BBC!)
But, I asked him and the class, was it a problem? In general they didn’t think so, but it’s given me pause for thought.
Best practice, common practice or best principles
One of the best things about the BBC’s online presence is that they are consistent. If I want to talk about writing tight headlines, I can reliably point to the BBC as a benchmark. Just as (often in the same lecture) I can point to the Daily Mail as a benchmark for engineering headlines for platforms, Buzzfeed for their social and video strategy, or ft.com and Ampp3d for their use of visualization.
In a world full of Buzzfeed and Upworthy headlines – fighting for social media eyeballs – I can see how the BBC might feel a little tame. In fact, compared to a lot of sites and the bright shiny toys of digital journalism, I suppose the BBC can seem pretty dull. But, for me, that consistency is really valuable. It’s about first principles.
Doing journalism in a digital world is tricky. There is so much churn that finding some good, basic, solid ground is quite a valuable thing. But I would hate for people learning it to feel trapped by a ‘learn the rules so you can break them’ approach. I would equally hate for people to feel trapped by constantly having to do the next big thing.
Saying forget the basics exemplified by the likes of the BBC and load up on the cutting-edge responsive, mobile, data skills (or vice versa) is a mistake. Of course it’s also a false dichotomy. I’m pretty confident that across the board students get a chance to do both and getting students to reflect on and practice both is really valuable.
Reflecting on how I present that is just as valuable.
Am I holding them back? Is there a better way to sell the basics?
During the 2008 summer Olympics, the Beijing Air Track project took a team of photographers from Associated Press and used them to smuggle hand-held pollution sensors into Beijing. Using their press access to the Olympic venues, they gathered pollution readings to test the Chinese government’s claim that a series of extreme emergency measures, put in place in the run-up to the games, had improved the city’s notoriously poor air quality. They were not the only organisation to use sensors in this way. The BBC’s Beijing office also used a hand-held sensor to test air pollution, gathering data that appeared in a number of reports during the games.
“prime example of how sensors, data journalism, and old-fashioned, on-the-ground reporting can be combined to shine a new level of accountability on official reports”.
In contrast to the Chinese data, the level of transparency displayed in the way the data was collected vividly illustrates how sensors can play a part in reinforcing data journalism’s role in the process of accountability.
Testing the context, provenance and ownership of our data – where it comes from and why – is a fundamental part of the data journalism process. If we are not critical of the data we use (and those that provide it), perhaps becoming over-reliant on data press releases, we risk undermining our credibility with data-churnalism or, worse still, data-porn! As data journalism practice evolves, whilst the basic critical skills will remain fundamental, it would seem logical to explore ways to reduce our dependency on other sources altogether. The Beijing project, with its use of sensors, offers a compelling solution. As Javaun Moradi, product manager for NPR digital, succinctly put it:
“If stage 1 of data journalism was ‘find and scrape data.’, then stage 2 was ‘ask government agencies to release data’ in easy to use formats. Stage 3 is going to be ‘make your own data’”
The three stages that Moradi identifies are not mutually exclusive. Many data journalism projects already include an element of gathering new data, often done using traditional forms of crowdsourcing: questionnaires or polls. As much as involving the audience has its benefits, it is notoriously unpredictable and time-consuming. But as individuals we already make a huge amount of data. That isn’t just data about us collected by others through a swipe of a loyalty card or by submitting a tax return online. It’s also data we collect about ourselves and the world around us.
An increasing number of us strap sensors to ourselves that track our health and exercise, and the “internet of things” is creating a growing source of data from the buildings and objects around us. The sensors used by the AP team were specialist air pollution sensors that cost in excess of $400 – an expensive way for cash-strapped newsrooms to counter dodgy data. Since 2008, however, the price has dropped, and the growing availability of cheap computing devices such as the Raspberry Pi and Arduino, along with the collaborative, open source ethic of the hacker and maker communities, has lowered the barriers to entry. Now sensors, and the crowd they attract, are a serious option for developing data-driven reporting.
Hunting for (real) bugs with data
In 2013, New York braced itself for an invasion. Every 17 years a giant swarm of cicadas descends on the East Coast. The problem is that exactly when in the year the insects will appear is less predictable. The best indicator of the emergence of the mega-swarm (as many as a billion cicadas in a square mile) seems to be when the temperature eight inches below the ground reaches 64 degrees Fahrenheit (18C). So when John Keefe, WNYC’s senior editor for data news and journalism technology, met with news teams to look at ways to cover the story, he thought of the tinkering he had done with Arduinos and Raspberry Pis. He thought of sensors.
Keefe could not find a source for the data that offered any level of local detail across the whole of New York. He took the problem of how to collect the data to a local hackathon, organised by the station’s popular science show Radiolab, which helped create a “recipe” for an affordable, easy-to-make temperature sensor. Listeners could build one and send results back to a website, where the data would be mapped.
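The threshold logic at the heart of the cicada tracker is simple enough to sketch in a few lines. Purely as an illustration – this is not WNYC’s code, and the readings below are invented – the crowdsourced trigger boils down to comparing soil temperatures against the 64F (18C) mark:

```python
# Toy illustration of the cicada trigger: flag locations where the soil
# temperature eight inches down has reached 64F (about 18C). The readings
# are made up; WNYC's real data came from listener-built sensors.

EMERGENCE_F = 64.0  # soil temperature linked to cicada emergence

def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius."""
    return (f - 32) * 5.0 / 9.0

readings = {"Brooklyn": 61.5, "Queens": 64.2, "Staten Island": 66.0}

for place, temp_f in sorted(readings.items()):
    status = "cicadas likely" if temp_f >= EMERGENCE_F else "not yet"
    print(f"{place}: {temp_f}F / {f_to_c(temp_f):.1f}C - {status}")
```

The hard part, of course, wasn’t the maths; it was getting hundreds of listeners to build sensors and report readings back.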
Whilst sensors play an enabling role in both examples, underpinning both the Beijing Air Track and cicada projects is the idea of collaboration. The Beijing project was originally developed by a team from the Spatial Information Lab at Columbia University. Combining the access of the media with the academic process and expertise of the lab gave the project a much bigger reach and authority. It’s a form of institutional collaboration that echoes, in a small way, in more recent projects such as The Guardian’s 2012 Reading the Riots. The cicada project, on the other hand, offers an insight into a kind of community-driven collaboration that reflects the broader trend of online networks and the dynamic way groups form.
Safecast and the Fukushima nuclear crisis
On 11 March 2011, Joichi Ito was in Cambridge, Massachusetts. He had travelled from Japan for an interview to become head of MIT’s prestigious Media Lab. The same day a massive underwater earthquake off the coast of Japan caused a devastating tsunami and triggered a meltdown at the Fukushima Dai-ichi nuclear plant, starting the worst nuclear crisis since Chernobyl in 1986. Ito, like many others, turned to the web and social media to find out if family and friends were safe and to gather as much information as he could about the risk from radiation.
At the same time as Ito was searching for news about his family, US web developer Marcelino Alvarez was in Portland scouring the web for information about the possible impact of the radiation on the US west coast. He decided to channel his “paranoia” and within 72 hours his company had created RDTN.org, a website aggregating and mapping information about the level of radiation.
For Alvarez and Ito the hunt for information soon developed into an effort to source geiger counters to send to Japan. Within a week of the disaster, the two had been introduced and RDTN.org became part of the project that would become Safecast.org. As demand outstripped supply, their efforts to buy geiger counters quickly transformed into a community-driven effort to design and build cheap, accurate sensors that could be deployed quickly to gather up-to-date information.
SIDENOTE: It will be interesting to see how the experiences of Beijing and Safecast could come together in the coverage of the 2020 Olympics in Japan.
Solving problems: Useful data and Purposed conversations
Examples such as WNYC’s cicada project show how a strong base of community engagement can help enable data-driven projects. But the Safecast network was not planned; it grew
“from purposed conversations among friends to full time organization gradually over a period of time”
There was no news conference to decide when and how it would respond or attempt to target contributors. It was a complex, self-selecting mix of different motivations and passions that coalesced into a coherent response to solve a problem. It’s a level of responsiveness and scale of coverage that news organisations would struggle to match on their own. In that context, Moradi believes that journalism has a different role to play:
Whether they know it or not, they do need an objective third party to validate their work and give it authenticity. News organisations are uniquely positioned to serve as ethical overseers, moderators between antagonistic parties, or facilitators of open public dialogue
Taking a position as a “bridge” between those with data and resources and “the public who desperately want to understand the data and access it but need help” is a new reading of what many would recognise as a traditional part of journalism’s process and identity. The alignment of data journalism with the core principles of accountability and the purpose of investigative journalism, in particular, makes for a near-perfect meeting point for a dynamic mix of like-minded hacks, academics and hackers, motivated not just by transparency and accountability but by a desire to go beyond highlighting issues and begin to put solutions in place. This mix of ideologies, as the WikiLeaks story shows, can be explosive, but the output has proved invaluable in helping (re)establish the role of journalism in the digital space. Whether it is a catalyst to bring groups together, a way to engage and amplify the work of others or, as Moradi puts it, a way to “advance the cause of journalism by means other than reporting”, sensor journalism seems to be an effective gateway to exploring these new opportunities.
The digital divide
The rapid growth of data journalism has played a part in directing attention, and large sums of money, to projects that take abstract concepts like open government and “make them tangible, relevant and useful to real live humans in our communities”. It’s no surprise, then, that many of them take advantage of sensors and their associated communities to help build their resources: innovative uses of smartphones, co-opting the internet of things or using crowd-funded sensor projects like the Air Quality Egg. But a majority of the successful data projects funded by organisations such as the Knight Foundation have outputs that are almost exclusively digital: apps or data dashboards. As much as they rely on the physical to gather data, the results remain resolutely trapped in the digital space.
“We are at a tipping point in relation to the on-line world. It is moving from conferring advantage on those who are in it to conferring active disadvantage on those who are without”
The solution to this digital divide is to focus on getting those who are not online connected. As positive as this is, it’s a predictably technologically deterministic solution to the problem, one that critics say conflates digital inclusion with social inclusion. For journalism, and data journalism in particular, it raises an interesting challenge to claims of “combating information asymmetry” and increasing the data literacy of readers on a mass scale.
Insight journalism: Journalism as data
In the same year as the Digital Britain report appeared, the Bespoke project dived into the digital divide by exploring ways to create real objects that could act as interfaces to the online world. The project took residents from the Callon and Fishwick areas of Preston, Lancashire, recognised as some of the most deprived areas in the UK, and trained them as community journalists who contributed to a “hyperlocal” newspaper that was distributed around the estate. The paper also served as a way of collecting “data” for designers, who developed digitally connected objects aimed at solving problems identified by the journalists. A process the team dubbed insight journalism.
One example, the Wayfinder, was a digital display with a moving arrow which users could text to point to events happening in the local area.
Another, Viewpoint, was a kiosk placed in local shops that allowed users to vote on questions from other residents, the council and other interested parties. The questioner had to agree that they would act on the responses they got, a promise that was scrutinised by the journalists.
The idea was developed further during the 2012 Unbox festival in India, when a group of designers and journalists applied the model of insight journalism to the issue of sexual harassment on the streets of New Delhi. The solution, built on reports and information gathered by journalists, was a device that would sit on top of one of the many telegraph poles that clutter the streets attracting thousands of birds. The designers created a bird table fitted with a bell. When a woman felt threatened or was subjected to unwanted attention, she could use Twitter to “tweet” the nearest bird table and a bell would ring. The ringing bell would scatter any roosting birds, giving a visible sign of a problem in the area. The solution was as poetic as it was practical, highlighting not just the impact of the physical but the power of journalism as data to help solve a problem.
Stage four: Make data real
Despite its successes, sensor journalism is still a developing area and it is not yet clear whether it will see any growth beyond the environmental issues that drive many of the examples presented here. Like data journalism, much of the discussion around the field focuses on the new opportunities it presents. These often intersect with equally nascent but seductive ideas such as drone journalism. More often than not, though, they bring the discussion back to the more familiar ground of the challenges of social media, managing communities and engagement.
As journalism follows the mechanisms of the institutions it is meant to hold to account into the digital space, it is perhaps a chance to think about how data journalism can move beyond simply building capacity within the industry and providing useful case studies. Perhaps it is a way to help journalism re-connect with the minority in society who, by choice or by circumstance, are left disconnected.
Thinking about ways to make the data we find, and the data journalism we create, physical closes a loop on a process that starts with real people in the real world. It begins to raise important questions about what journalism’s role should be in not just capturing problems and raising awareness but also creating solutions. In an industry struggling to re-connect, it may also start to address the problem of placing journalism back in the community and making it sustainable. Researchers reflecting on the Bespoke project noted that:
“elements of the journalism process put in place to inform the design process have continued to operate in the community and have proven to be more sustainable as an intervention than the designs themselves”
If stage three is to make our own data, perhaps it is time to start thinking about stage four of data journalism and make data real.
Alba, Davey (2013) Sensors: John Keefe and Matt Waite on the current possibilities, Tow Center for Digital Journalism, 5 June. Available online at http://towcenter.org/blog/sensors-john-keefe-and-matt-waite-on-the-current-possibilities/, accessed on 12 August 2013
Alvarez, Marcelino (2011) 72 hours from concept to launch: RDTN.org, Uncorked Words, 21 March. Available online at http://uncorkedstudios.com/2011/03/21/72-hours-from-concept-to-launch-rdtn-org/, accessed on 12 August 2013
Ashton, Kevin (2009) That “Internet of Things” thing, RFiD Journal 22 pp 97-114. Available online at http://www.rfidjournal.com/articles/view?4986, accessed on 25 September 2013
BBC (2008) In pictures: Beijing pollution-watch, BBC News website, 24 August. Available online at http://news.bbc.co.uk/sport1/hi/front_page/6934955.stm, accessed on 12 August 2013
Blum-Ross, Alicia, Mills, John, Egglestone, Paul and Frohlich, David (2013) Community media and design: Insight journalism as a method for innovation, Journal of Media Practice, Vol. 14, No 3, September pp 171-192
Bradshaw, Paul and Brightwell, Andy (2012) Crowdsourcing investigative journalism: Help Me Investigate: A case study, Siapera, Eugenia and Veglis, Andreas (eds) The Handbook of Global Online Journalism, London: John Wiley & Sons pp 253-271
Department of Business Innovation and Skills (2009) Digital Britain: Final report, London: Stationery Office
Ellison, Sarah (2011) The man who spilled the secrets, Vanity Fair, February. Available online at http://www.vanityfair.com/politics/features/2011/02/the-guardian-201102, accessed on 13 September 2013
Gray, Jonathan, Chambers, Lucy and Bounegru, Liliana (2012) The Data Journalism Handbook, O’Reilly. Free version available online at http://datajournalismhandbook.org/
Howard, Alex (2013) Sensoring the news, O’Reilly Radar, 22 March. Available online at http://radar.oreilly.com/2013/03/sensor-journalism-data-journalism.html, accessed on 12 August 2013
Kalin, Sari (2012) Connection central, MIT News Magazine, 21 August. Available online at http://www.technologyreview.com/article/428739/connection-central/, accessed on 22 August 2013
Knight, Megan (2013) Data journalism: A preliminary analysis of form and content. Paper delivered to the International Association for Media and Communication Research, 25-29 June, Dublin
Livingstone, Sonia and Lunt, Peter (2013) Ofcom’s plans to promote “participation”, but whose and in what?, LSE Media Policy Project, 27 February. Available online at http://blogs.lse.ac.uk/mediapolicyproject/2013/02/27/ofcoms-plans-to-promote-participation-but-whose-and-in-what/, accessed on 23 September 2013
Moradi, Javaun (2011) What do open sensor networks mean for journalism?, Javaun’s Ramblings, 16 December. Available online at http://javaunmoradi.com/blog/2011/12/16/what-do-open-sensor-networks-mean-for-journalism/#sthash.yXXlHoa2.dpuf, accessed on 9 August 2013
Oliver, Laura (2010) UK government’s open data plans will benefit local and national journalists, Journalism.co.uk, 1 June. Available online at http://www.journalism.co.uk/news/uk-government-039-s-open-data-plans-will-benefit-local-and-national-journalists/s2/a538929/, accessed on 12 August 2013
Rogers, Simon (2011) Facts Are Sacred: The Power of Data (Guardian Shorts), Cambridge, UK: Guardian Books
Safecast (no date) Safecast history, Safecast.org. Available online at http://blog.safecast.org/history/, accessed on 25 September 2013
Sopher, Christopher (2013) How can we harness data and information for the health of communities?, Knight Foundation, 16 August. Available online at https://www.newschallenge.org/challenge/healthdata/brief.html, accessed on 10 September 2013
Taylor, Nick, Marshall, Justin, Blum-Ross, Alicia, Mills, John, Rogers, Jon, Egglestone, Paul, Frohlich, David M., Wright, Peter and Olivier, Patrick (2012) Viewpoint: Empowering communities with situated voting devices, Proc. CHI 2012 pp 1361-1370, New York: ACM
Taylor, Nick, Wright, Peter, Olivier, Patrick and Cheverst, Keith (2013) Leaving the wild: Lessons from community technology handovers, Proc. CHI 2013, New York: ACM
Waite, Matt (2013) How sensor journalism can help us create data, improve our storytelling, Poynter.org, 17 April. Available online at http://www.poynter.org/how-tos/digital-strategies/210558/how-sensor-journalism-can-help-us-create-data-improve-our-storytelling/, accessed on 28 August 2013