It’s a subject that isn’t going away and it’s also one that generates a huge amount of debate – data journalism. If ever there was a perfect hook to hang all of journalism’s best and worst on, it’s data journalism! But a recent flurry of tweets and a nice ‘there’s no reason not to try this stuff’ post from Matt Waite focussed on one part of the debate – how should we be doing more of this in our j-courses and who should be doing it.
It was something that Matt kicked off with a tweet:
Signs there is work to do: Data journalism is hottest thing going. Offer data journalism course. Two students sign up. Two.
— Matt Waite (@mattwaite) April 9, 2014
Quite a few people pitched in (an assortment of tweets below):
There is an interesting point in there about adjunct courses – essentially but not exclusively online courses – which I think is fair. There’s no better way to put journalists (and students) off than combining maths and computers!
As I said in my response, we do ‘data’ across all of our courses and I thought I’d share an example of the kind of intro practical stuff we are doing with first years (year one of a three-year degree). It’s done in the context of a broader intro to data and journalism and it’s developed and expanded throughout the three years (more so as we are shifting things around in the courses), including a dedicated data journalism module.
My take at this stage is that data journalism is worth considering as part of a more structured approach to journalism. The students are no doubt fed up of my ‘process into content’ mantra.
Anyway. The two slideshows below are, first, an intro/context lecture and, second, the related workshop. And, yes, I know there is a fair bit of visualization in there – charts and maps – which some data people can get quite sniffy about. We are careful to make the point that not all data is visual, but I do think a visual output can be a quick win for capturing people’s interest. It’s just the start.
Again, these are just the slides; there is the usual amount of narrative and discussion that goes with this. They are presented as is:
Let me know what you think if you get a chance.
Earlier in the week I wrote a post about making Instafax-style video using free stuff online. A few commenters, on and off the site, suggested that this would be the kind of thing you could do with your iPad. So I thought I would take a look.
The short answer is: You’d think it would be easy. Actually it’s a bit of a pain.
The first step is finding a way to make the image/text side of things.
Making nice images
I tried a few apps to see if I could get that combination of editing (cropping and image manipulation) and text that I got from Pixlr.
A neat solution to the image manipulation and cropping came from Aviary. Their app has a neat crop tool and the image manipulation/filter tools are nice to play with. But Aviary’s text tools are pretty limited. You can add text but it’s limited by size and is always center aligned. Not quite what I want.
I also had a look at the Instagram-focused end of the market. One app that I liked was AfterPhoto. It crops to a square ratio but the text tool is limited to one line at a time. What makes up for that limitation, however, is the ability to add ‘layers’ of text. Another option was Over. It shares a similar style of editing with AfterPhoto but the text tools are pretty flexible. It’s not free though.
As it turned out, Pixlr was also the solution to the problem on iPad, as it was on the web, with its Pixlr Express app. Square cropping, nice text and image manipulation. Well done, Autodesk! The only thing to remember with Pixlr Express is to apply all your filters etc. before you add text!
Being positive about it, you could say that you’re spoiled for choice when it comes to image editing apps on the iPad. You could range around and cherry-pick the nice fonts and filters from a number of them.
Making the video
There are surprisingly few useful, free apps for video editing on iPads. ‘But wait a minute Andy,’ you cry. ‘What about iMovie?’ Technically you could say that’s cheating anyway, as it’s only free if you happen to own a swanky new iPad. The rest of us chumps paid for it! But it’s nice and swish.
Sadly it falls at the first hurdle. In Apple’s cuddly style it demands that any stills fill the screen and are animated to make them dynamic and interesting. Now I love a good Ken Burns effect as much as the next man but it’s not what we want here.
Another issue is that you can’t set the resolution of the video clip (you can’t set a custom width and height) so any video produced would be cropped by Instagram. iMovie fail!
In terms of other video editing options, it’s slim pickings. There are a few free video editors that I tried but most failed when it came to keeping the images at the right resolution. Some did, but watermarked the video. In one way that was less of a problem as Instagram actually crops it out. But that’s not the most ethical or fair way to go.
The best solution I found was an app called Flipagram. A very neat app that will quickly build up a slideshow for you. It has the added bonus of allowing you to record your own narration. That could be a real plus-point for those looking to leverage the audio-slideshow style of narrative. The downside is that it does add a watermark.
And the result…
But what about adding video…
If you do want to mix video and images (and have both behave in terms of resolution) then, I’m afraid, you’re paying for an app. Even if you pay, as I said before, it’s slim pickings. The big problem, as far as recreating instafax goes, is that the text tools on most editing apps are risible.
If I had to recommend an app (and a workflow) it would be a combination of VideoCrop (free) and Pinnacle Studio (£8.99). Use VideoCrop to crop the video to the right format and then use Pinnacle to piece it together. Pinnacle respects the aspect ratio of the video and images you use, so any video you output should crop nicely in Instagram. Be prepared to wrangle with the tools though (especially text and the mystical composite setting). It’s a steep learning curve.
So it is possible to recreate my original experiment on an iPad using free tools. But the process underlined for me that the assumption that your iPad/smartphone/tablet is a multimedia powerhouse is pretty wide of the mark. Moving outside the TV box with video is a case of moving around apps. A combination of tools will get the job done but, as with most things, money buys you flexibility.
That said, if image slideshows are your thing then the Pixlr/Flipagram combination is a winner in my book.
Let me know what you think.
Last week I spent a very pleasant day at the Newsrewired conference in London. I was moderating a panel on short-form video. It prompted a lot of thinking about what that actually was. But one example of what it could be was the BBC’s project Instafax. I’m still a bit skeptical as to whether this is a ‘new form’ as much as a nice use of a platform. (I’ll maybe blog more about that issue.)
Actually I’m just more impressed that orgs like the BBC, Channel 4 and The Guardian are experimenting with visual storytelling online. They aren’t alone. A number of startups like NowThisNews are experimenting with using micro-video on platforms like Vine and Instagram to reach that much-desired mobile audience.
Anyway, above what I might think of the rhetoric around the experiments, I did think that it was an interesting idea to show to students. It struck me as a fun way to introduce images etc. and think about telling stories in different ways. So I set about working out a way to do Instafax-style video on the cheap (well, free).
One of the things that was clear in the panel discussion was how much a lot of orgs still rely on quite expensive kit and infrastructure to make video happen. (The key seems to be in getting your initial settings right.) Now, we aren’t short of kit at the uni but we do have some restrictions on the tools we can use and things we can install. So I was looking for a solution that was pretty much web-based and as universal as it could be.
So here it is:
Instafax on no budget.
- Some nice images of news stories (make sure you have cleared their use before you start)
- Access to an image editor. Photoshop and GIMP are fine but in this recipe we will be using Pixlr.com
- Access to a YouTube account
- An Instagram account
- A phone with the Instagram app to upload your video.
Making the image
- Open up a new image in Pixlr.
- Set the width and height to 640 pixels.
- Open up the image you want to use in your video.
- Cut-and-paste the image you want to use into your new image.
- Select the crop tool.
- Set the Constraint option to Output Size.
- Set the output Width and Height to 640px. Note: be careful how you use this tool. The crop will resize to 640×640. If you highlight a small part of the image, or your image was small to start with, it can ‘blow up’ the selection and leave you with a blurry, pixelated image.
- Use the text tool to add a suitable caption. It’s worth thinking about where you put your caption. It seems to be common practice to add a caption at the top or bottom but never in the middle of the image. I’m guessing that’s to avoid it being obscured by a play icon on some platforms.
- Save the image(s) as a PNG file.
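The resize warning in the crop step above comes down to simple arithmetic: the largest centred square you can take from a photo, and whether scaling it to 640×640 means scaling up. A rough Python sketch just to illustrate the maths (Pixlr is a web app; the function name here is mine):

```python
def square_crop(width, height, out_size=640):
    """Work out a centred square crop for a width x height image and
    whether scaling it to out_size x out_size would blow it up
    (i.e. leave you with a blurry, pixelated result)."""
    side = min(width, height)           # largest square that fits
    left = (width - side) // 2          # centre the crop horizontally
    top = (height - side) // 2          # ...and vertically
    upscaled = side < out_size          # scaling up = loss of sharpness
    return (left, top, left + side, top + side), upscaled

# A 1200x800 photo crops cleanly; a 400x300 one would be blown up.
print(square_crop(1200, 800))   # ((200, 0, 1000, 800), False)
print(square_crop(400, 300))    # ((50, 0, 350, 300), True)
```

The same caution applies whatever editor you use: if the square you select is smaller than 640 pixels on a side, the output will be interpolated up and will look soft.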
Making the video
- Open the YouTube editor (youtube.com/editor)
- Click the camera icon and click Add Photos to the project
- Upload the images you created
- Add the images to the timeline. Remember your video has to be 15 seconds, so stretch or minimize to fill. A guide of four seconds a slide is not a bad starting point. It depends on the amount of text.
- When you’re done, publish the video
- When the video has been processed go to your video manager (youtube.com/my_videos or click video manager on the video page)
- Click the edit dropdown next to the video
- Click Download MP4
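The timing guide in the steps above is simple arithmetic; a throwaway Python sketch (the 15-second cap is Instagram’s limit, the four-second guide comes from the text, and the function name is mine):

```python
def per_slide_seconds(n_slides, total=15.0):
    """Instagram caps clips at 15 seconds; split the time evenly
    across the slides so the video exactly fills the limit."""
    if n_slides < 1:
        raise ValueError("need at least one slide")
    return total / n_slides

# Around the ~4s-a-slide guide that works out to 3 or 4 slides:
for n in (3, 4, 5):
    print(n, round(per_slide_seconds(n), 2))   # 3 5.0 / 4 3.75 / 5 3.0
```

In practice you drag each slide's duration in the YouTube editor rather than calculate it, but it shows why much more than four or five slides leaves too little time per caption.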
The video looks something like…
Getting it on Instagram
- Copy the MP4 file to your device. Email is good, or maybe Dropbox would help here.
- Upload using the Instagram app as normal.
When you add your video to Instagram, don’t forget the caption. You can get quite a lot in there and it works well as a kind of summary/intro/cue for the story.
As a process it’s a bit clumsy, and the rendering up and down from YouTube doesn’t leave the crisp edges that you would get from using better kit (or the whizzy transitions). But I think it does the job and, with some music (which you could add using YouTube’s own editor), I think it’s a viable, entry-level way to explore image slideshows and mobile audiences.
What about adding video?
You can easily add video using the YouTube editor but Instagram will crop the outer edges. So make sure you frame the video with the key elements in the middle. Also, the YouTube editor text tools are very, very limited.
The big gap here is the ‘transfer to your phone’ bit. There is a site called Gramblr that will allow you to upload from the desktop, but it wants your username and password. If that’s a price you’re prepared to pay (and I’ve no reason to assume that it isn’t safe) then it’s a workable solution. But I think Dropbox or email is just as easy, and if you use the native app to upload you get all the other stuff like tags etc.
I’m convinced there is always real value in playing around with platforms. It isn’t just geeky tinkering. As I said, fair play to organisations that are experimenting in the way the BBC are. For me, this was as much an exercise in something interesting for the students to try – exploring new platforms and playing with kit – as it was any attempt to prove it could be done. But I think, like slideshows, this is an opportunity for those with plenty of images to explore new narrative styles.
Let me know what you think.
Oh, and hey BBC! If you’re looking to drop the insta bit, how about something that sums up what it is. Facts that you can see. Maybe, seethefax… seefax… something like that.
This semester my Monday mornings are spent in the company of our Foundation Degree students, and this morning I was talking about online writing.
I spent a bit of time talking about headlines and how important they are in the digital world. They’re not just the start of a good article. Headlines are your online envoys; little text tugboats sailing around the web, pulling people back to your site.
I mentioned Jakob Nielsen’s often-cited post on the BBC and their headlines – World’s Best Headlines: BBC News. At which point one of the students seemed to crack: ‘It’s always the BBC. Whenever tutors give us a good example, it’s always the BBC!’
After a few moments of reflection, I kind of had to agree. (although I also assured him that I was pretty convinced that it wasn’t because we were being paid by the BBC!)
But, I asked him and the class, was it a problem? In general they didn’t think so, but it’s given me pause for thought.
Best practice, common practice or best principles
One of the best things about the BBC’s online presence is that they are consistent. If I want to talk about writing tight headlines, I can reliably point to the BBC as a benchmark. Just as (often in the same lecture) I can point to the Daily Mail as a benchmark for engineering headlines for platforms, Buzzfeed for their social and video strategy, or ft.com and Ampp3d for their use of visualization.
In a world full of Buzzfeed and Upworthy headlines – fighting for social media eyeballs – I can see how the BBC might feel a little tame. In fact, compared to a lot of sites and the bright shiny toys of digital journalism, I suppose the BBC can seem pretty dull. But, for me, that consistency is really valuable. It’s about first principles.
Doing journalism in a digital world is tricky. There is so much churn that I think finding some good, solid basic ground is quite a valuable thing. But what I would hate is for people learning it to feel trapped by a ‘learn the rules so you can break them’ approach. I would equally hate for people to feel trapped by constantly having to do the next big thing.
Saying ‘forget the basics exemplified by the likes of the BBC and load up on cutting-edge responsive, mobile, data skills’ (or vice versa) is a mistake. Of course, it’s also a false dichotomy. I’m pretty confident that across the board students get a chance to do both, and getting students to reflect on and practise both is really valuable.
Reflecting on how I present that is just as valuable.
Am I holding them back? Is there a better way to sell the basics?
The following is an edited version of a chapter I contributed to a new book Data Journalism: Mapping the Future, published by Abramis academic publishing. The fact that I’m in it aside, I can heartily recommend it as a great mix of practical and contextual information. Go and buy one. Go on!
During the 2008 summer Olympics, the Beijing Air Track project took a team of photographers from Associated Press and used them to smuggle hand-held pollution sensors into Beijing. Using their press access to the Olympic venues, they gathered pollution readings to test the Chinese government’s claim that a series of extreme emergency measures put in place in the run-up to the games had improved the city’s notoriously poor air quality. They were not the only organisation to use sensors in this way. The BBC’s Beijing office also used a hand-held sensor to test air pollution, gathering data that appeared in a number of reports during the games.
It was an approach described as a “prime example of how sensors, data journalism, and old-fashioned, on-the-ground reporting can be combined to shine a new level of accountability on official reports”.
In contrast to the Chinese data, the level of transparency displayed in the way the data was collected vividly illustrates how sensors can play a part in reinforcing data journalism’s role in the process of accountability.
Testing the context, provenance and ownership of our data – where it comes from and why – is a fundamental part of the data journalism process. If we are not critical of the data we use (and those that provide it), perhaps becoming over-reliant on data press releases, we risk undermining our credibility with data-churnalism or, worse still, data-porn! As data journalism practice evolves, whilst the basic critical skills will remain fundamental, it would seem logical to explore ways that we reduce our dependency on other sources altogether. The Beijing project, with its use of sensors, offers a compelling solution. As Javaun Moradi, product manager for NPR digital, succinctly put it:
“If stage 1 of data journalism was ‘find and scrape data’, then stage 2 was ‘ask government agencies to release data’ in easy-to-use formats. Stage 3 is going to be ‘make your own data’.”
The three stages that Moradi identifies are not mutually exclusive. Many data journalism projects already include an element of gathering new data, often done using traditional forms of crowdsourcing: questionnaires or polls. As much as involving the audience has its benefits, it is notoriously unpredictable and time-consuming. But as individuals we already make a huge amount of data. That isn’t just data about us collected by others through a swipe of a loyalty card or by submitting a tax return online. It’s also data we collect about ourselves and the world around us.
An increasing number of us strap sensors to ourselves that track our health and exercise, and the “internet of things” is creating a growing source of data from the buildings and objects around us. The sensors used by the AP team were specialist air pollution sensors that cost in excess of $400 – an expensive way for cash-strapped newsrooms to counter dodgy data. Since 2008, however, prices have dropped, and the growing availability of cheap computing devices such as the Raspberry Pi and Arduino, together with the collaborative, open source ethic of the hacker and maker communities, has lowered the barriers to entry. Now sensors, and the crowd they attract, are a serious option for developing data-driven reporting.
Hunting for (real) bugs with data
In 2013, New York braced itself for an invasion. Every 17 years a giant swarm of cicadas descends on the East Coast. The problem is that exactly when in the year the insects will appear is less predictable. The best indicator of the emergence of the mega-swarm (as many as a billion cicadas in a square mile) seems to be when the temperature eight inches below the ground reaches 64 degrees Fahrenheit (18C). So when John Keefe, WNYC’s senior editor for data news and journalism technology, met with news teams to look at ways to cover the story, he thought of the tinkering he had done with Arduinos and Raspberry Pis. He thought of sensors.
Keefe could not find a source for the data that offered any level of local detail across the whole of New York. He took the problem of how to collect the data to a local hackathon, organised by the station’s popular science show Radiolab, which helped create a “recipe” for an affordable, easy-to-make temperature sensor. Listeners could build one and send results back to a website, where the station would map the information.
Whilst sensors play an enabling role in both examples, underpinning both the Beijing Air Track and cicada projects is the idea of collaboration. The Beijing project was originally developed by a team from the Spatial Information Lab at Columbia University. Combining the access of the media with the academic process and expertise of the lab gave the project a much bigger reach and authority. It’s a form of institutional collaboration echoed, in a small way, in more recent projects such as The Guardian’s 2012 Reading the Riots. The cicada project, on the other hand, offers an insight into a kind of community-driven collaboration that reflects the broader trend of online networks and the dynamic way groups form.
Safecast and the Fukushima nuclear crisis
On 11 March 2011, Joichi Ito was in Cambridge, Massachusetts. He had travelled from Japan for an interview to become head of MIT’s prestigious Media Lab. The same day a massive underwater earthquake off the coast of Japan caused a devastating tsunami and triggered a meltdown at the Fukushima Dai-ichi nuclear plant, starting the worst nuclear crisis since Chernobyl in 1986. Ito, like many others, turned to the web and social media to find out if family and friends were safe and to gather as much information as he could about the risk from radiation.
At the same time as Ito was searching for news about his family, US web developer Marcelino Alvarez was in Portland scouring the web for information about the possible impact of the radiation on the US’s west coast. He decided to channel his “paranoia” and within 72 hours his company had created RDTN.org, a website aggregating and mapping information about the level of radiation.
For Alvarez and Ito the hunt for information soon developed into an effort to source Geiger counters to send to Japan. Within a week of the disaster, the two had been introduced and RDTN.org became part of a project that would become Safecast.org. As demand outstripped supply, their efforts to buy Geiger counters quickly transformed into a community-driven effort to design and build cheap, accurate sensors that could be deployed quickly to gather up-to-date information.
SIDENOTE: It will be interesting to see how the experiences of Beijing and Safecast could come together in the coverage of the 2020 Olympics in Japan.
Solving problems: Useful data and purposed conversations
Examples such as WNYC’s cicada project show how a strong base of community engagement can help enable data-driven projects. But the Safecast network was not planned; it grew
“from purposed conversations among friends to full time organization gradually over a period of time”
There was no news conference to decide when and how it would respond or attempt to target contributors. It was a complex, self-selecting mix of different motivations and passions that coalesced into a coherent response to solve a problem. It’s a level of responsiveness and scale of coverage that news organisations would struggle to match on their own. In that context, Moradi believes that journalism has a different role to play:
“Whether they know it or not, they do need an objective third party to validate their work and give it authenticity. News organisations are uniquely positioned to serve as ethical overseers, moderators between antagonistic parties, or facilitators of open public dialogue”
Taking a position as a “bridge” between those with data and resources and “the public who desperately want to understand the data and access it but need help” is a new reading of what many would recognise as a traditional part of journalism’s process and identity. The alignment of data journalism with the core principles of accountability and the purpose of investigative journalism, in particular, makes for a near perfect meeting point for the dynamic mix of like-minded hacks, academics and hackers, motivated not just by transparency and accountability but by a desire to go beyond highlighting issues and begin to put solutions to problems in place. This mix of ideologies, as the WikiLeaks story shows, can be explosive, but the output has proved invaluable in helping (re)establish the role of journalism in the digital space. Whether it is a catalyst to bring groups together, a way to engage and amplify the work of others or, as Moradi puts it, a way to “advance the cause of journalism by means other than reporting”, sensor journalism seems to be an effective gateway to exploring these new opportunities.
The digital divide
The rapid growth of data journalism has played a part in directing attention, and large sums of money, to projects that take abstract concepts like open government and “make them tangible, relevant and useful to real live humans in our communities”. It’s no surprise, then, that many of them take advantage of sensors and their associated communities to help build their resources: innovative uses of smartphones, co-opting the internet of things or crowd-funded sensor projects like the Air Quality Egg. But a majority of the successful data projects funded by organisations such as the Knight Foundation have outputs that are almost exclusively digital: apps or data dashboards. As much as they rely on the physical to gather data, the results remain resolutely trapped in the digital space.
As far back as 2009, the UK government’s Digital Britain report warned:
“We are at a tipping point in relation to the on-line world. It is moving from conferring advantage on those who are in it to conferring active disadvantage on those who are without”
The solution to this digital divide is to focus on getting those who are not online connected. As positive as this is, it’s a predictably technologically deterministic solution to the problem, one that critics say conflates digital inclusion with social inclusion. For journalism, and data journalism in particular, it raises an interesting challenge to claims of “combating information asymmetry” and increasing the data literacy of readers on a mass scale.
Insight journalism: Journalism as data
In the same year as the Digital Britain report appeared, the Bespoke project dived into the digital divide by exploring ways to create real objects that could act as interfaces to the online world. The project took residents from the Callon and Fishwick areas of Preston, Lancashire, recognised as some of the most deprived areas in the UK, and trained them as community journalists who contributed to a “hyperlocal” newspaper distributed around the estate. The paper also served as a way of collecting “data” for designers, who developed digitally connected objects aimed at solving problems identified by the journalists – a process the team dubbed insight journalism.
One example, the Wayfinder, was a digital display and a moving arrow which users could text to point to events happening in the local area.
Another, Viewpoint, was a kiosk placed in local shops that allowed users to vote on questions from other residents, the council and other interested parties. The questioner had to agree that they would act on the responses they got, a promise that was scrutinised by the journalists.
The idea was developed further during the 2012 Unbox festival in India, when a group of designers and journalists applied the model of insight journalism to the issue of sexual harassment on the streets of New Delhi. The solution, built on reports and information gathered by journalists, was a device that would sit on top of one of the many telegraph poles that clutter the streets, attracting thousands of birds. The designers created a bird table fitted with a bell. When a woman felt threatened or was subjected to unwanted attention she could use Twitter to “tweet” the nearest bird table and the bell would ring. The ringing bell would scatter any roosting birds, giving a visible sign of a problem in the area. The solution was as poetic as it was practical, highlighting not just the impact of the physical but the power of journalism as data to help solve a problem.
Stage four: Make data real
Despite its successes, sensor journalism is still a developing area and it is not yet clear if it will see any growth beyond the environmental issues that drive many of the examples presented here. Like data journalism, much of the discussion around the field focuses on the new opportunities it presents. These often intersect with equally nascent but seductive ideas such as drone journalism. More often than not, though, they bring the discussion back to the more familiar ground of the challenges of social media, managing communities and engagement.
As journalism follows the mechanisms of the institutions it is meant to hold to account into the digital space, it is perhaps a chance to think about how data journalism can move beyond simply building capacity within the industry and providing useful case studies. Perhaps it is a way to help journalism re-connect with the minority in society who, by choice or by circumstance, are left disconnected.
Thinking about ways to make the data we find, and the data journalism we create, physical closes a loop on a process that starts with real people in the real world. It begins to raise important questions about what journalism’s role should be in not just capturing problems and raising awareness but also creating solutions. In an industry struggling to re-connect, it maybe also starts to address the problem of placing journalism back in the community and making it sustainable. Researchers reflecting on the Bespoke project noted that:
“elements of the journalism process put in place to inform the design process have continued to operate in the community and have proven to be more sustainable as an intervention than the designs themselves”
If stage three is to make our own data, perhaps it is time to start thinking about stage four of data journalism and make data real.
Alba, Davey (2013) Sensors: John Keefe and Matt Waite on the current possibilities, Tow Centre for Digital Journalism, 5 June. Available online at http://towcenter.org/blog/sensors-john-keefe-and-matt-waite-on-the-current-possibilities/, accessed on 12 August 2013
Alvarez, Marcelino (2011) 72 Hours from concept to launch: RDTN.org, Uncorked Words, 21 March. Available online at http://uncorkedstudios.com/2011/03/21/72-hours-from-concept-to-launch-rdtn-org/, accessed on 12 August 2013
Ashton, Kevin (2009) That “Internet of Things” thing, RFiD Journal 22 pp 97-114. Available online at http://www.rfidjournal.com/articles/view?4986, accessed on 25 September, 2013
Department of Business Innovation and Skills (2009) Digital Britain: Final Report, Stationery Office
BBC (2008) In pictures: Beijing pollution-watch, BBC News website, 24 August. Available online at http://news.bbc.co.uk/sport1/hi/front_page/6934955.stm, accessed on 12 August 2013
Blum-Ross, Alicia, Mills, John, Egglestone, Paul and Frohlich, David (2013) Community media and design: Insight journalism as a method for innovation, Journal of Media Practice, Vol. 14, No 3, 1 September pp 171-192
Bradshaw, Paul and Brightwell, Andy (2012) Crowdsourcing investigative journalism: Help me Investigate: A case study, Siapera, Eugenia and Veglis, Andreas (eds) The Handbook of Global Online Journalism, London: John Wiley & Sons pp 253-271
Ellison, Sarah (2011) The man who spilled the secrets, Vanity Fair, February. Available online at http://www.vanityfair.com/politics/features/2011/02/the-guardian-201102, accessed on 13 September 2013
Gray, Jonathan, Chambers, Lucy and Bounegru, Liliana (2012) The Data Journalism Handbook. O’Reilly. Free version available online at http://datajournalismhandbook.org/
Howard, Alex (2013) Sensoring the news, O’Reilly Radar, 22 March. Available at http://radar.oreilly.com/2013/03/sensor-journalism-data-journalism.html, accessed on 12 August 2013
Kalin, Sari (2012) Connection central, MIT news magazine, 21 August. Available online at http://www.technologyreview.com/article/428739/connection-central/, accessed on 22 August 2013
Knight, Megan (2013) Data journalism: A preliminary analysis of form and content. A paper delivered to the International Association for Media and Communication Research, 25-29 June, Dublin
Livingstone, Sonia and Lunt, Peter (2013) Ofcom’s plans to promote “participation”, but whose and in what? LSE Media Policy Project, 27 February. Available online at http://blogs.lse.ac.uk/mediapolicyproject/2013/02/27/ofcoms-plans-to-promote-participation-but-whose-and-in-what/, accessed on 23 September 2013
Moradi, Javaun (2011) What do open sensor networks mean for journalism?, Javaun’s Ramblings, 16 December. Available online at http://javaunmoradi.com/blog/2011/12/16/what-do-open-sensor-networks-mean-for-journalism/#sthash.yXXlHoa2.dpuf, accessed on 9 August 2013
Oliver, Laura (2010) UK government’s open data plans will benefit local and national journalists, Journalism.co.uk, 1 June. Available online at http://www.journalism.co.uk/news/uk-government-039-s-open-data-plans-will-benefit-local-and-national-journalists/s2/a538929/, accessed on 12 August 2013
Rogers, Simon. (2011) Facts are Sacred: The Power of Data (Guardian shorts), Cambridge, UK: Guardian Books
Safecast History (no date) Safecast.com. Available online at http://blog.safecast.org/history/, accessed on 25 September 2013
Sopher, Christopher (2013) How can we harness data and information for the health of communities?, Knight Foundation, 16 August. Available online at https://www.newschallenge.org/challenge/healthdata/brief.html accessed on 10 September 2013.
Taylor, Nick, Marshall, Justin, Blum-Ross, Alicia., Mills, John, Rogers, Jon, Egglestone, Paul, Frohlich, David M., Wright, Peter, Olivier, Patrick (2012) Viewpoint: Empowering Communities with Situated Voting Devices, Proc. CHI 2012 pp 1361-1370, New York: ACM (don’t understand this reference)
Taylor, Nick, Wright, Peter, Olivier, Patrick and Cheverst, Kieth (2013) Leaving the wild: lessons from community technology handovers. in CHI ’13 (don’t understand this reference)
Waite, Matt. (2013) How sensor journalism can help us create data, improve our storytelling, Poynter.org. 17 April. Available online at http://www.poynter.org/how-tos/digital-strategies/210558/how-sensor-journalism-can-help-us-create-data-improve-our-storytelling/, accessed on 28 August 2013
It’s that time of year again where I find myself doing a tour of various rooms and buildings introducing myself to new students. By a quirk of timetabling and course structures I don’t get to see many of them to teach until later in the year. So I spend a bit of time in my intros talking about the benefit of getting their digital presence in hand now. A healthy online presence takes time to grow and develop; it can’t be left until you graduate.
So how do you keep your digital journalism healthy? Why not try these vitamin supplements in your daily routine:
Vitamin A: Aggregation
Social media and the ‘river of news’ are where it’s at, but how do you manage the flow of content and information around you? Do you have a reader like Feedly, or do you use something like IFTTT to collect all your tweets, or tweets around a certain tag? Do you bookmark with tools like Diigo?
Vitamin B: Brand
Everything is a brand these days and, annoying as the word is, I still think it’s one of the best ways to describe the space between the professional side of journalism and the more personal side of social media (maybe persona works as well but there isn’t really a vitamin P!). All the negative aspects of the word are just as useful to consider when thinking about how you represent yourself online. So, what are you doing to make sure people see you online? More importantly, who are you online? Do you need a Facebook page? What about Google+? Are you confusing your personal and professional audiences, or are they the same?
Vitamin C: Community/Curation
Community is not just a buzzword; it’s a job description for some journos. The best way to understand a community is to be part of it. Being a journalist who works with or represents a community can often mean simply collecting and presenting the best and most interesting content and conversations that community has to offer. In other words, curation. How do you gather the material you aggregate and present it to the audience? Do you use Storify? What about Tumblr? Twitter lists? Email newsletters?
Vitamin D: Data (also Development)
Data journalism is big news these days, so it never hurts to get your head around new tools (import.io, for example). In that respect D could also be about development: developing new practical skills. Collecting data and understanding the practice of data journalism are skills that are going to be in demand for a while yet. But the industry focus on data is as much about metrics and the response to data: data-driven journalism. How are you measuring your engagement with people? Is it followers on Twitter or likes on Facebook? Do you need to invest more time in finding other metrics to help you target and develop your content?
Vitamin E: Engagement
The health benefits of the vitamins above are amplified by engagement. Getting out there and connecting with people – finding new people in communities, or people who can help with a spreadsheet or bit of software – is key to what you do. But it needs to be a real connection; a conversation. So what are you doing to connect over and above a follow or a like? What’s the value of the connections you make, both to you and to the people you connect with? What opportunities are there to meet people in the real world?
Update: They knocked back my second request on the grounds of anonymity. The sample was so small that giving me details might risk identifying someone. That seems fair, but if nothing else the very low number means that, in the context of my original thinking, the numbers are not large enough to suggest a broader story. (Taking as read that the individual circumstances are sad and may have warranted reporting at the time.)
I always like to test out the stuff that I ask my students to do; don’t ask people to do something you wouldn’t try yourself (apart from maybe fitting a gas cooker or disarming a bomb). So I’ve been collecting data from various places to use in data journalism exercises, including FOI requests via Whatdotheyknow.com.
I asked for details of people who had died whilst on student and Tier 4 visas. It was playing out a hunch (just curiosity) I had about a few things, in particular the number of those deaths that would be suicides. I thought it would make interesting data and would be something that might interest students without getting into the dangerous territory of ‘student stories’.
Where possible I would like to know the date, location of their death, gender, age, cause of death and sponsor institution. If you could provide this information in digital form, preferably in a spreadsheet format, that would be very helpful.
Here’s the data I got.
Not really what I wanted. The main reason cited was that, apart from the information above, they were “only able to report on data that is captured in certain mandatory fields on the Home Office’s Case Information Database (CID).” Most of the information I wanted would be in the ‘notes’ section of any records, which would need to be located manually.
The Home Office is not obliged under section 12 of the Freedom of Information Act 2000 to comply with any information request where the estimated costs involved in supplying the information exceed the £600 cost limit. I regret that we cannot supply you with the information that you have asked for, as to comply with your request would exceed this cost limit.
Fair enough, although I was a bit suspicious that some of the information that would seem to be pretty useful, like sponsoring institution, would not have a field. But I realised that I didn’t really know what fields were in there. In fact I didn’t really know that the Case Information Database was where that stuff would be.
Thanks to an FOI by Helen Murphy, I found out that:
All data held on the Caseworker Information Database will fall within a
minimum data set. The Caseworker Information Database contains:
• Date of birth
• Arrival details
• Temporary admission address
• Detention details
• Refusal reasons
• Diary actions
• Removal details
More surprisingly it also reveals that “Currently there are over 75 screens on the Caseworker Information Database (CID)”. 75 screens! No wonder they can’t find anything!
7 hour days
Helen’s FOI also helped illuminate working conditions at the Home Office. Her response states:
The £600 limit is based on work being carried out at a rate of £25 per hour, which equates to 24 hours of work per request.
In my response:
This [£600] limit applies to all central Government Departments and is based on work being carried out by one member of staff at a rate of £25 per hour, which equates to 3½ days work per request.
Taking one as a different way of expressing the other (a dangerous assumption) would suggest less than 7-hour days at the Home Office. Still, that seems fair given the number of screens you’d need to wade through. I’d give up after 2 hours!
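The arithmetic behind that comparison is simple enough to write down. A quick sketch in Python, using only the figures quoted in the two responses:

```python
# Two FOI responses describe the same section 12 cost limit differently.
cost_limit = 600    # £, the limit for central government departments
hourly_rate = 25    # £ per hour of staff time

# Helen's response: the limit "equates to 24 hours of work per request"
hours_per_request = cost_limit / hourly_rate

# My response: the same limit "equates to 3½ days work per request"
days_per_request = 3.5

# If both readings are right, the implied working day is:
hours_per_day = hours_per_request / days_per_request
print(hours_per_day)  # 6.857… – just under seven hours a day
```

Which is where the "less than 7-hour days" quip comes from; the dangerous assumption is simply that both letters are describing the same working pattern.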
Groups of 5
The other thing that struck me about the data was the alarmingly uniform numbers that people die in – 5 at a time. It turns out that the figures are not entirely complete *. A note on the data says:
Figures rounded to nearest 5 (- = 0, * = 1 or 2) and may not sum to totals shown because of independent rounding.
Why round them to 5? It’s not like half a person died! Update: In the comments Martin Stabe suggests “This could be an anonymisation requirement so that individual cases cannot be identified from aggregate data.”
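Martin’s suggestion fits a standard disclosure-control technique. As an illustration only – this is my guess at the logic, not the Home Office’s actual method – the rule in the note (‘-’ for 0, ‘*’ for 1 or 2, everything else rounded to the nearest 5) could be sketched like this:

```python
def disclose(count):
    """Publish a count the way the note on the data describes:
    '-' for zero, '*' for 1 or 2, otherwise round to the nearest 5."""
    if count == 0:
        return "-"
    if count in (1, 2):
        return "*"
    return str(5 * round(count / 5))

# A published '5' could be any true count from 3 to 7.
for n in (0, 1, 2, 3, 7, 12, 13):
    print(n, "->", disclose(n))
```

Rounding this way means two published 5s could be anywhere from 3 to 7 each, which is exactly why the note warns the figures “may not sum to totals shown”.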
Limits of being human
I’ve put another request in on the basis of the data I got, assuming that 10 cases would be manageable by someone in 3.5 days, although 75 screens’ worth of content might yet fox my demand, so I may never get what I want this way.
The truth is that, as data, what I got is next to useless – no real context and the numbers aren’t even accurate – but it reinforced a few things for me:
- Good FOIs rely on good planning and some prior knowledge. I’d done a bit of work understanding the whole Tier 4/student thing, but clearly I needed to do more on understanding who held the data, how and why. Data, in fact journalism, is all about context.
- Good FOIs rarely stand alone. Often an FOI is an enabler. It opens doors and avenues for further questions. That makes it valuable even when the data might be useless.
- Visibility helps. Helen’s FOI answered questions I had. Maybe mine won’t, but it’s in the mix.
- Open government doesn’t just rely on data. It relies on the capacity to retrieve and search that data. Government is really good at collecting it and shockingly bad at having it in a form that is usable, even to themselves. (But we all knew that, didn’t we?)
Not new or startling revelations but it never hurts to be reminded of these things from time to time.
* for ‘not entirely complete’ read ‘bugger all use’
I’m writing a book chapter on data journalism (I know, who isn’t these days) which I’ll share more of when it makes sense! But one of the areas that is giving me real pause for thought at the moment is the question of how much data journalism contributes to the democratic process.
Data journalism is fast becoming a motif for a range of challenges and opportunities in journalism: it’s about the integration of new technology and skills; it’s about (re)discovering a role for journalism in a changing media landscape; it’s about the industry’s capacity to save itself.
But more often than not, the general consensus is that it’s about the reinvigoration of journalism as part of the fourth estate. In fact really well known and kind of cool people tell us that’s what it needs to be.
The shot in the arm data has given to investigative and political reporting, coupled with a willingness not only to participate in but campaign for a transparent and open data culture, would seem to answer the question straight off the bat. Look at MPs’ expenses, look at WikiLeaks.
The powerful claim to operate in a new and open way (open government and all that) and data is their proof, and so we, armed with the new tools to understand that data, continue in our duty to hold them to account. Good data journalism goes a stage further and makes data available, in context, to the ‘public’. Not only does that make for great engagement and better journalism, but we give the audience the tools with which to fully understand and so participate in the democratic process. Job done!
Or is it? Whilst it’s holding the powerful to account, is the process of data journalism really producing ‘tools’ that people can use in the democratic process?
Does making a spreadsheet available to users really democratise information? Does making something searchable by postcode really make it more useful on the ground? Isn’t it just creating a small, equally unaccountable, data elite? Is it really just a good way to reposition (consolidate) journalism as gatekeepers?
Part of this is wondering what tools people really need to be part of a democracy. Does the general disaffection with the political process (in the UK anyway) mean that the majority of data journalism, which focusses on the business of government and big institutions (often because that’s where you can get the data), already lacks relevance? Is the dependence on online technologies for processing, distribution and presentation of this stuff really helpful in an environment where technical literacy in these areas is a problem?
Accountability or utility. What’s data journalism really about?
Over the last month my department has had a number of accreditation visits. Two of the training councils that, in the UK at least, inspect, accredit and generally rubber-stamp what we do, the BJTC and the NCTJ, have both been in looking at our courses. Thanks to a lot of hard work by colleagues, all of our courses got the seal of approval. Hurray!
Both visits included a lengthy session of questions for the course team around the why and how of what we do. For the most part, they are always useful and constructive; lots of things to reflect on and change to keep improving what we do. But sitting through the process raised a bit of a point to ponder for me.
Given the relative focus of each of the accrediting bodies (Broadcast for the BJTC and print for the NCTJ) it was interesting that both asked about the public facing provision and 24/7 nature of our output. The question really amounting to ‘do you have a 24/7 public facing news operation?’
Learning by doing is something we pride ourselves on (and something we are told to do more of), but when we learn we make mistakes, and mistakes in journalism, made in public, are more than a learning experience. They have a real impact on people and, well, let’s be frank, they can cost money – not one of the learning outcomes of our course the last time I looked! So we try to give as many public-facing opportunities as we can, but often keep what we do internal, though with no less of a demand that the stories are real and newsworthy.
Within the university world there are also opportunities for people to engage in other media – student newspapers and media have always been traditional stomping grounds for our students. But as a division, apart from the usual advice and support for those working on stories, we don’t have any involvement in the paper. It’s (rightly so in my view) a student union publication and independent from us.
More recently we have also come under pressure to make what we do more entrepreneurial. Making students aware of the opportunities of social media and how they can use things like blogs to promote themselves and reach a niche is, I think, part of that. We’ve seen that work (and all credit to the students here) in things like Blog Preston, the Preston Messenger and more. The burgeoning hyperlocal/local media market could and should be a rich vein for students to explore and develop their career chances.
Just because we can…
So when I hear the question about 24/7 news operations here is what I ponder – should we really be doing that?
- Should we as a publicly funded body (unless the government really get the claws out) plonk ourselves into that landscape and risk flattening, or at the very least skewing, the local media economy? Even a relatively small journalism school represents an effective staff far in excess of most local newsrooms.
- If we make it self-sustaining and sell ads (and measure success in a business-like way, encouraging the business focus many say we lack), then don’t we simply add more weight to that flattening effect? If I added our marketing and business courses to the mix of numbers…
What I’m also pondering is why organisations that claim to represent the interests of media organisations are also advocating that education organisations do that. Yes, on the face of it students will gain experience (although I don’t see that it’s the only or best way to get it), but at what cost to the organisations or the media landscape the students are looking to work in?
Having sat in many a room listening to regional and local news orgs bemoan the impact the BBC has on competition, it feels like a very strange day when I sit in a room and hear more than one regional news editor advocating the setting up of direct competition.