
Search Results


  • New research must be better reported, the future of society depends on it

    This article originally appeared in The Conversation and is published under a CC BY Licence.

Newspaper articles, TV appearances and radio slots are increasingly important ways for academics to communicate their research to wider audiences, whether that be the latest health research findings or discoveries from the deepest, darkest parts of the universe. The internet can also help to facilitate these channels of communication – as well as discussions between academics, funders and publishers, and citizen scientists and the general public. Yet all too often research-led stories start with “researchers have found”, with little mention of their names, their institution or who funded their work. The problem is that reporting new research in this way fails to break down the stereotypical image of an ivory tower. For all readers know, these “researchers” might as well be wearing white lab coats with the word “boffin” on their name badges.

Rolling news

News is now a 24-hour operation. Rolling coverage of stories means journalists have their work cut out in maintaining this cycle. But that is no excuse for missing out important pieces of information that underpin a story. Take, for example, a story relating to health research that has wide-ranging societal impact. Supporting evidence, links and named academics help a story’s authenticity and credibility. And at a time when “fake news” is an increasingly sticky problem, it becomes essential to link to the actual research and therefore the facts. This is important, because research goes through a peer review process in which experts in the same field critically assess the work before it can be published. This is similar to the way news stories are edited to ensure they are of good quality – although that process takes far less time.

Accurate reporting

In academia there has been a huge move to make research openly available and therefore accessible to the whole of society. While research institutions are making great strides in public engagement and the wider understanding of science, media organisations remain instrumental in that process. And while it has been claimed that the public are tired of experts, the impact they have on society – from building skyscrapers to keeping us alive – is undoubtedly fundamental to our existence. But poor or incomplete reporting undermines respect for experts by misrepresenting the research, especially by trivialising or sensationalising it. So while academics from various disciplines are often willing to talk to the media – either as an author or from an independent expert viewpoint – misreporting of research, and particularly of data (whether intentional or unintentional), has a negative effect. Academics are then vilified as having something to hide or accused of making up their research, while members of the public are exposed to unnecessary anxiety and stress by inappropriate headlines and cherry-picked statistics reported in a biased way.

The public good

Of course, not everyone will want to check the citations and research outputs – and not everyone has the critical skills to assess a piece of specialised academic writing. Yet there are lots of people who, given the opportunity, would be interested in reading more about a research topic. Media coverage opens up a democratic debate, allows people to explore the works of an accomplished researcher and helps the public understanding of science.
And in this way, fair and accurate reporting of research encourages academics to be willing to work with the media more regularly and build good working relationships. Not only that, but the proper and accurate communication of science is beneficial to the whole of society – from the government to its citizens. So in the age of “fake news” it is more important than ever to make sure that what’s being published is the truth, the whole truth and nothing but the truth.

  • Disentangling the academic web: what might have been learnt from Discogs and IMDB

    The post below was originally published in the LSE Impact of Social Sciences Blog under a CC BY 2.0 Licence.

Academia can always learn a lot from the rest of the world when it comes to working with the web. The project 101 Innovations in Scholarly Communications is a superb case study, highlighting the rapid growth in academic and associated web platforms. As a result there is an increasing problem for academics when they come to choose the platform or tool for carrying out their work on the web. Choice is good, but too much can lead to decision fatigue and anxiety over having to adapt to more and more new tools and make decisions as to their value. In the last decade various organisations, academics and start-ups have noticed gaps in the market and created tools and websites to help organise and communicate the work of academics. This is now arguably having the negative effect of researchers not knowing where to invest their time and energy in communicating, sharing and hosting their work, as no one can use every platform available. Even by linking many of them together, there are still issues around their maintenance and use. In hindsight, academia could have learned from two successes of the internet era. Discogs and the Internet Movie Database (IMDB) are two of the most popular websites on the planet. Each is authoritative and seen as the ‘go to’ platform for millions of users interested in music and film respectively. IMDB is ranked at #40 and Discogs at #799 in Alexa, a global internet ranking index of websites. IMDB was one of the original internet sites, launched in 1990, with Discogs arriving a decade later in 2000. Whilst there are other similar websites, few come close to their user numbers and the huge amount of specialised content they host. By contrast, academia has tried desperately to place large swathes of information under umbrellas of knowledge, but it all feels a bit too much like herding cats. Academia has always made use of the web to have discussions and host research and institutional websites, but it has failed to control the number of newer platforms that promise to be an essential tool for academics. Over the last decade – and notably in the last five years – hundreds of tools that aim to enhance a researcher’s workflow, visibility and networks have been created. Many of these do indeed offer a service: Figshare hosts research outputs; Mendeley manages references; and Altmetric.com tracks attention. They are all superb and offer something befitting academia in the 21st century. The problem for many academics is that they struggle to engage with these tools due to the overwhelming number to choose from. If you want to manage references, do you use Endnote, Mendeley, Zotero, RefMe, ReadCube or Paperpile? If you wish to extend your research network, do you sign up for ResearchGate, Google Scholar, Academia.edu, Piirus, LinkedIn or even Facebook? This is before we tap into the more niche academic social networks. Then there is the problem of visibility: how do you make sure your fellow academics, the media, fund holders or even members of the public can find you and your work? ORCiD obviously solves some of this, but it can be seen as a chore and yet another profile that needs configuring and connecting. As research in the 21st century continues on its current trajectory towards openness and impact, and as scholarly communications develop, there will no doubt be yet more tools and platforms to deal with all that content and communication.
If we think about making data accessible and reusable, post-publication open peer review, as well as making other research outputs available online, we may see a more tangled web than ever before.

What Discogs could teach us

Like so many of the post-Web 2.0 academic interactive platforms, content is driven by the users – in this case academics and supporting professionals. Of course, a large number of formal research platforms have remained as they were, hosted by institutions, research bodies, funders and publishers. Yet more and more research outputs are being deposited elsewhere, such as GitHub (which has a comparable internet ranking to IMDB), Figshare, Slideshare, ResearchGate and Google Drive, to give just a few examples. How can we compare the research world with Discogs? In my mind Discogs is not too dissimilar to the research world and all of its outputs. Listed below are some of the similarities between them. Those who have used Discogs will hopefully make the connection quicker than those who have not. IMDB and Discogs can be searched in various different ways, all of which allow a person to drill deeper into an area of the database or move around serendipitously using the hyperlinks. So with Discogs you may know the title of a song but not the artist, or you may know what label it was released on. You may also be keen to track down a particular version of a release based on geographical, chronological or label data. The search functions of Discogs may not be as complex as a research database such as Medline, but for the typical Discogs user this is not essential. What are the big problems a Discogs or IMDB-type site could solve?

Version control

With growing interest in academic publishing platforms that capture the various stages of publishing research, there is a problem of ensuring those searching for that research find the version they really want. We have the final, peer reviewed, accepted and formatted version; the report the paper may have contributed to; the pre-print; the early draft; the research proposal; and the embryonic research idea. Research platforms such as ROI aim to capture as much of this research process as possible.

Unique identity

ORCiD is a great tool for aligning the work of one person to their true identity (especially so for early career researchers or academics who change their name mid-career, for example). You do not have to have a common surname such as Smith, Brown, Taylor or Jones to be mistaken for another researcher; academics with less common names have this problem too. If a researcher publishes using their middle initial and then without it, this can create multiple identities in some databases, and tying them all together is not always straightforward and can be time consuming. In Discogs, an artist or band is listed with all name variations collected under the most commonly used title. ORCiD allows this, but sadly the problem is already very extensive.

Additional research outputs

The mainstay of academic output is the journal paper, but that is not the case for some areas of research. There are artistic performances, computer code, software, patents, datasets, posters, conference proceedings and books, among others. Some stand alone, whilst there are increasing numbers of satellite outputs tied to one piece of research. For example, in Discogs we might think of the LP album as the definitive item and of the single, EP or digital release as outputs resulting from that.
For research this may be the report or journal paper with attached outputs including a poster, dataset and conference presentation (a minimal sketch of such a record structure appears after this excerpt).

Interaction with the research data

Both Discogs and IMDB allow users to interact with their huge databases of information. Users can set up accounts, add music and films to their personal collections, leave reviews and contribute knowledge. To flip that into an academic context, that might mean users saving research artefacts to a reference management package, leaving open peer review comments and contributing their own insights and useful resources. Such a platform would not operate in isolation, as there would still be a need for other connected web presences to exist: social media, such as Twitter, to communicate to wider audiences; publication platforms to host all of the published research; tools to manage references and track scholarly attention. Other tools would also be needed to help conduct the research, analyse and present results and data, create infographics, take lab notes, collaborate on documents and create presentations. Then there is the issue of who would oversee such a huge database, manage it and ensure it is kept largely up to date. Of course, with something similar to Discogs and IMDB anyone could enter and update the content, with proper accreditation, an audit trail and moderation. Such a platform would be accessible to funders, charities and the public, with certain limitations on access to certain content. Hindsight is a wonderful thing, and given how IMDB and Discogs have grown into such well-known and well-used platforms, it is a shame that the same did not happen in academia to help create such a central hub of knowledge and activity.
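
To make the Discogs analogy concrete, here is a minimal sketch of what a canonical research record could look like, written as plain Python dataclasses. It is purely illustrative: the class and field names (Work, Version, SatelliteOutput, name_variants and so on) are hypothetical and do not correspond to the schema of any real platform, but they show how versions, contributor name variants and satellite outputs could all hang off a single canonical record, much like a Discogs master release.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical, Discogs-style data model for research outputs: one canonical
# "work" record groups every version of a piece of research, its satellite
# outputs, and the name variants of its contributors.

@dataclass
class Contributor:
    canonical_name: str                     # the name all variants resolve to
    name_variants: List[str] = field(default_factory=list)
    orcid: str = ""                         # optional persistent identifier

@dataclass
class Version:
    label: str                              # e.g. "preprint", "accepted manuscript", "version of record"
    year: int
    url: str = ""

@dataclass
class SatelliteOutput:
    kind: str                               # e.g. "dataset", "poster", "conference talk", "code"
    title: str
    url: str = ""

@dataclass
class Work:
    title: str
    contributors: List[Contributor]
    versions: List[Version] = field(default_factory=list)
    outputs: List[SatelliteOutput] = field(default_factory=list)

    def find_contributor(self, name: str) -> Optional[Contributor]:
        """Resolve any recorded name variant back to the canonical contributor."""
        for person in self.contributors:
            if name == person.canonical_name or name in person.name_variants:
                return person
        return None

# Example: a journal paper grouped with its preprint, dataset and poster.
paper = Work(
    title="A study of scholarly communication platforms",
    contributors=[Contributor("J. A. Smith", name_variants=["Jane Smith", "J. Smith"])],
    versions=[Version("preprint", 2015), Version("version of record", 2016)],
    outputs=[SatelliteOutput("dataset", "Survey responses"),
             SatelliteOutput("poster", "Conference poster")],
)
print(paper.find_contributor("Jane Smith").canonical_name)  # -> "J. A. Smith"
```

The point is not the exact fields but the shape: as with a Discogs master release, everything related to one piece of research resolves to a single entry, whichever version, output type or author name variant a searcher starts from.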

  • 0 is the magic number: Why small numbers matter just as much as large ones when we talk about Altmetrics

    The post below was originally published in the LSE Impact of Social Sciences Blog under a CC BY 2.0 Licence.

A lot has been written in the last couple of years about altmetrics and the scores that come with them, whether that be the Altmetric.com, ResearchGate or Kudos score, to name but a few. The tools focus on different areas, with Altmetric.com being one that tries to capture a broad range of data from scholarly and public communications. With that comes its own Altmetric.com score, which is weighted depending on which platform was used: for example, a Tweet is worth one point, a blog post five and a news article eight (a small illustrative sketch of such a weighted score appears after this excerpt). Hence with so many of these metrics, including traditional ones like the impact factor, h-index and citation count, the bigger the number the better. With Altmetric.com that may be good but it is not wholly useful, as small numbers, especially 0, can tell us a lot too. The real value in altmetrics does not come from the score but from the fact that it measures previously ignored research outputs, such as individual papers and datasets. It also shows us where these outputs are being communicated or, in the case of an altmetric score of 0, not communicated. The score of zero is to some extent more important than 50, 100 or higher. It tells us that this research has not been shared, discussed, saved or covered in the media. In a world increasingly governed by impact, scholarly communication and the dissemination of your research, the straight flat zero indicates a possible need to communicate your work. Of course, detractors of such systems will point to the likes of the Kardashian index and argue that science is not about popularity, or the ability to communicate beyond a niche group. It is about the ability to complete rigorous and quality research: i.e. research that is captured in journals, repositories and data management systems and shared at conferences. Yet when so many systems for global scholarly communication exist, why not use them? Given that many research papers are never cited, it should follow that they will never be Tweeted, shared, blogged or saved in Mendeley. Yet as the research world increasingly uses social media to communicate with the wider world, whether that be publishers, charities, funders or the general public, the ease with which academics can communicate their work is apparent. If a researcher’s altmetric score is 0 it may seem depressing to think no one has shared or communicated their work, but it does offer them a starting place in this new world. Unlike citations, it is an instant feedback loop; whether you want to act upon it remains your choice. Whilst critics may be wary of the gaming of altmetrics scores, and rightly so, the number 0 tells us something potentially important: that either no one knows your research exists and is yet to discover it, or that sadly no one is interested in it. Obviously we cannot say this for sure, as we are just talking about active participants on the web, whether that be a discussion forum, blog, news or social media. There are many academics not engaged on the social web who one day, with the aid of a literature search or conference presentation, will discover your work. At least with an altmetric score of 0 you can only go up; no one can get a negative altmetric score. So this means investigating who to share your research with and where. The problem detractors have with altmetrics is that they are concerned we are focusing just on the numbers.
It is a legitimate concern: how many Tweets your paper gets is not an indication that it is a good piece of research. Yet that has always been the case; a high number of citations has not always indicated high quality research. The chances are that it is a good piece of research, but we can take nothing for granted in academia. To put it into a sporting context, in cricket having the highest batting average or scoring the most runs in a team has never been proof of being the best player; it does give us insight that we are looking at one of the best of the bunch, but it is merely a useful indicator. That is what altmetrics are: indicators of communication and interest at varying levels. So the concern is that funders, managers and journals might start to pay too much attention to big numbers. This in turn might cause some to game the system to increase those numbers, but that was always the case pre-altmetrics. Journal editors have been known to ask authors to cite papers from their own publications, and it is not unheard of for authors to self-cite. Whilst some of this might sound like counter-arguments to altmetrics, they are not. We do need to have discussions about what we want from altmetrics. Many academics would be lying if they said they were not interested in where their research was being discussed on the web. The useful by-product of altmetrics is that we have a much better idea if our research is not being discussed at all. The score 0 may be as significant as 1000 for some academics, as it tells them that no one is talking about their research. The bigger problem is that it has become confusing when there are several platforms that generate their own metrics: Altmetric.com, ResearchGate and more public tools such as Klout all have their own scores. It could start to feel like the early days of eBay, before commercial companies set up profiles and generating a 100% feedback score became paramount. These days it is not so important; big feedback scores on eBay mean nothing more than that someone has sold lots of stuff, and one negative makes no difference. Whilst academics will become increasingly aware of the newer metrics, some may be shocked by the succession of zeros by their outputs, especially when so many could be highly cited. The solution is to explain why this happens, should they wish to build on that score. The scores do not convey the quality of their work or standing, but for those wanting to reach out, or looking for feedback on how this is going, the number 0 is a sign that the only way is up.
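
As a rough illustration of the weighting described above (a Tweet worth one point, a blog post five, a news article eight), here is a minimal Python sketch of a weighted attention score. The weights and the attention_score function are built only from the example figures quoted in the post; the real Altmetric.com algorithm applies further adjustments, so treat this as an approximation for illustration, not the actual scoring code.

```python
# Illustrative weights quoted in the post: tweet = 1, blog post = 5, news article = 8.
WEIGHTS = {"tweet": 1, "blog": 5, "news": 8}

def attention_score(mentions: dict) -> int:
    """Sum the mention counts multiplied by their source weights."""
    return sum(WEIGHTS.get(source, 0) * count for source, count in mentions.items())

# A paper mentioned in 12 tweets, 2 blog posts and 1 news story:
print(attention_score({"tweet": 12, "blog": 2, "news": 1}))  # 12*1 + 2*5 + 1*8 = 30

# And the case the post dwells on: no mentions anywhere gives the flat zero.
print(attention_score({}))                                   # 0
```

The zero case is the interesting one: whatever the weights, an output that is never mentioned anywhere scores nothing, which is exactly the signal the post argues is worth acting on.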

  • Many a true word is spoken in jest: Twitter accounts that mock, self-ridicule and bring a smile to academia

    The post below was originally published in the LSE Impact of Social Sciences Blog under a CC BY 2.0 Licence.

For many academics Twitter stands out above any other social media tool as their platform for open and scholarly engagement – partially thanks to its immediacy, reach and convenience, but also because it can be a break from the formality of academic writing and dissemination. Twitter’s limit of 140 characters means academics have to think carefully not just about what they say but also how they say it. Twitter allows communications to be snappy, sharp and on many occasions quick-witted. If Twitter has done anything for the academic community, it has brought the research conversation out into the public domain. As a by-product, a small number of accounts that mock, self-ridicule and bring a touch of humour to the very serious world of research have flourished. Welcome to the weird and wonderful world of academic Twitter.

Research Wahlberg @ResearchMark 1239 Tweets, 15.1k Followers

Research Wahlberg is reminiscent of an old Tumblr blog titled ‘Hey girl, I like the library too’, a collection of memes of fellow Hollywood heartthrob Ryan Gosling. The purpose of that blog was to post images of Gosling, smouldering and sexy, saying things like: “Hey girl, you know I would never publish in anything but an open access journal, because changing the existing unsustainable model of scholarly communication is really important to me, you know?” At @ResearchMark we see a similar collection of memes interspersed with retweets that aren’t as masterful as Gosling’s chat-up lines, but we are open to the idea that Research Wahlberg has a bit more to him than just cheesy chat-up lines this time.

Potentially started out of angst and annoyance at the peer review model, Shit My Reviewers Say has as its aim “Collecting the finest real specimens of reviewer comments since 1456”. Twinned with a Tumblr blog of the same name, it sets out to document the malicious, pernickety and sometimes confusing world of blind peer review. The pinned tweet sums up the collection nicely, as one poor researcher is put to the sword with the line: “I am afraid this manuscript may contribute not so much towards the field’s advancement as much as toward its eventual demise.” Most academics who have ever been on the receiving end of reviews that required major corrections will know that sinking feeling they get when reading such barbed feedback. Whether all of these comments are fact or fiction, or have been amended, we do not know, but there is plenty for researchers to take solace in, especially when they next receive such joyous feedback on their paper.

A social experiment by Associate Professor Nathan C Hall, Shit Academics Say is a mixture of funny one-liners, memes and clever irony. With an impressive 171,000 followers it is fair to say that this feed has resonated with the academic community and possibly beyond. It is certainly well worth following if you are after reassurance, a good laugh or to gawp at what academics can be capable of delivering. The accompanying blog explains why the micro-blog feed appeared, with Hall saying: “Like many academics, I have never been completely comfortable with the peculiarities, predilections, or pretensions of our profession.” With snippets of advice in fewer than 140 characters, such as “If you can’t say anything nice, say it as a question.” and “I don’t make mistakes. I create teachable moments”, there is much to take from this stream of consciousness.
Another academic Twitter account with an impressive number of followers is this visual collection of tweets in which female academic problems are captured in scenes acted out by Lego characters. Sadly there has been just one tweet so far in 2016, but it is still worth keeping an eye on, especially if you work in a lab – whilst the Lego Research Institute might be something you ask for next time your birthday comes round.

Another visual account that does not fall short when it comes to effort. As you can imagine from the title, this is a comic, or series of cartoons, with an accompanying website and the movies PhD The Movie 1 and 2. Created by Jorge Cham in 1997, PhD Comics is about ‘life (or the lack thereof) in academia’. The website has impressive pageview stats in the tens of millions each year and has moved on from the original monochrome version to full colour. The sheer breadth of content and issues touched on around undertaking a PhD and working in academia is incredible. For those who have an adverse fear of failure it might be reassuring to follow the latest tweets and know that you are not alone.

Improbable Research aims to ‘highlight research that first makes people laugh and then think’. With the accompanying website http://www.improbable.com/ and a YouTube account, it has its own Ig Nobel Prize Ceremony and Lectures. The Ig Nobel Prizes aim to celebrate the unusual, honour the imaginative and spur people’s interest in science, medicine and technology. The recording of the 25th First Annual Ig Nobel Prize Ceremony can be viewed here. All the improbable research captured by the Ig Nobel Prize can be followed through their Twitter account. Tweets are interspersed with updates from previous prize winners, videos and events. From the outside looking in at this account, especially the 2015 prize ceremony video, it makes academia look like a weird cult – which of course it is, but we don’t like to talk about that.

There is no shortage of fictitious and spoof social media accounts, especially on Twitter. Most fail miserably or dry up after a few carefully scheduled tweets; it is reasonable to suspect that some start as an axe to grind and soon disappear or run out of ideas. As with some of the aforementioned accounts, when it is done well it can be profound, hilarious and even add quality to the academic conversation. Most communicate in a language that is coherent to those outside of the ivory towers, and perhaps the number of followers some accounts have highlights that more academics have a funny bone than we give them credit for. Despite the silliness of the above accounts, they do often discuss issues in the academic community rarely touched on so publicly. They are, if anything, a light take on a business that often takes itself too seriously. When we pull back the curtain we see a different side to academia, which can be a very strange business, filled with its own language and culture. So if you are on Twitter, love academia, occasionally feel alone or an imposter, or are after some geek humour, there is an online world out there waiting for you to follow.

  • Jobs.ac.uk Talk - How to hack your research

    23 ways to communicate and showcase your skills, by Andy Tattersall. The Digital Academic: Tools & Tips for Research Impact and ECR Employability. Social media and digital tools are now a staple of many researchers’ practices and have brought new dimensions to publishing, finding and organising information, problem solving and results sharing. But what does it really mean to be ‘a digital academic’? How can you build your online academic profile via social media? Maybe you don’t think you have time, or you don’t know what to do first? Do hiring committees actually care about your 'digital academic impact'? To help you identify the must-have technologies and tools for being a modern digital academic, and the skills to manage them successfully, jobs.ac.uk and Piirus are hosting an exclusive half-day workshop event. This part of the presentation is delivered by our speaker Andy Tattersall. Andy Tattersall is an Information Specialist at ScHARR, The University of Sheffield. His role is to scan the horizon for opportunities relating to research, teaching and collaboration and to maintain networks that support this.

  • When it comes to information overload, we’re like frogs in boiling water

    The post was originally published in The Conversation under a Creative Commons CC BY 2.0 Licence.

To understand how being constantly connected through computers and mobile devices has encroached on our working lives, consider the experiment about the frog in a pan of boiling water. A frog in a pan of cold water that is gently heated will not realise it is boiling to death if the change is sufficiently gradual. In the same way, the web has affected our attention span and so our productivity – slowly but surely the heat of distraction has increased as decades of internet evolution have added email, websites, instant messaging, forums, social media and video. Striving to manage technology better or wean ourselves off distractions such as social media updates or emails can be very hard, if not virtually impossible for some. It requires serious willpower.

Lock-down

What’s the answer for today’s organisations – lock down and block, and risk restricting access to genuinely useful content and services? Blocking and locking off parts of the web can only hinder progress and innovation, while reacting too slowly to change and innovation, as seen in the NHS, can have a negative impact on technology uptake, especially now that the internet is made up of things. If we are to advance knowledge, it is essential to have access to the full gamut of content online. Whether that is to study the effects of pornography on society or for a student’s private consumption, we have to be mature about this: there is some content on the web that will always be in demand. In fact, the government’s efforts to deal with online pornography have led to the over-zealous use of internet filters. Dumb filters performing keyword filtering inevitably led to legitimate sex education websites being blocked. Procrastination is not new, and people will always find new and inventive ways of putting off work. But there are means to help tackle that distraction, if only for some rather than all of the time.

Eat that frog

The problem with digital distraction is that it often starts from the first moment we sit down at our desks, or even before we have got there. Once we open our email we are drawn into conversations, questions and broadcasts. The more emails appear, the more we feel compelled to deal with them. A useful solution involves that frog again: we all have tasks we ignore and delay, nagging away at the back of our minds. We have to complete these tasks, so why not start your day by doing just that and eating that frog: instead of checking frivolous updates and emails, tackle an important task that is hanging around first thing in the morning.

The Pomodoro Technique

The popular Pomodoro Technique, which suggests using 30-minute time slots for a single task, followed by a break, can be helpful in dedicating time to specific projects (a minimal timer sketch of this idea appears after this excerpt). Another way to rein in distraction is to create lists or use time management apps like 30:30 or Wunderlist. These help set up a structured pattern to the working day, which is especially useful if you need to use social media professionally but also need to carve out time to get other things done.

Meditate

Meditation and mindfulness have gained much attention in the last couple of years, through the likes of Andy Puddicombe’s popular Headspace imprint. In a busy office this offers a sensible solution to the problem of losing focus. Just five minutes of meditation could help quiet the mind and return focus to completing the current task.
Various studies have highlighted the benefits of meditation and mindfulness on a digital worker’s productivity, and on their general happiness too.

Create an alternative productivity calendar

Paper diaries are still often used, if less so given the modern proliferation of electronic alternatives. Calendars often dictate the modern worker’s routine, so much so that workers fill in the spaces between appointments with fractured and incomplete tasks. Another solution is to create a personal online calendar to overlay a work calendar. By scheduling everything, from checking social media and emails to family time and free periods, it is possible to make better use of the time you have.
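
For readers who like to automate such routines, below is a minimal command-line sketch of the time-slot idea described above: work on a single task for a fixed slot (the post suggests 30 minutes), then take a short break. The focus_session function, its default slot length, break length and number of rounds are all illustrative choices, not a prescription from the original article.

```python
import time

def focus_session(task: str, work_minutes: int = 30, break_minutes: int = 5, rounds: int = 2) -> None:
    """Alternate fixed work slots on one task with short breaks."""
    for n in range(1, rounds + 1):
        print(f"Round {n}: work on '{task}' for {work_minutes} minutes.")
        time.sleep(work_minutes * 60)       # single-task work slot
        print(f"Round {n}: take a {break_minutes}-minute break.")
        time.sleep(break_minutes * 60)      # step away from the screen
    print("Session finished - review what you completed before opening email.")

if __name__ == "__main__":
    focus_session("draft the results section")
```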

  • Internet of things devices meant to simplify our lives may end up ruling them instead

    The post below was originally published in The Conversation and is re-published under CC BY 2.0.

Technology’s promise of wonderful things in the future stretches from science fiction to science fact: self-driving cars, virtual reality, smart devices such as Google Glass, and the internet of things are designed to make our lives easier and more productive. Certainly, inventions of the past century such as the washing machine and combustion engine have brought leisure time to the masses. But will this trend necessarily continue? On the surface, tech that simplifies hectic modern lives seems a good idea. But we risk losing more of the time these devices are designed to free up to the growing need to micromanage them. Recall that an early digital technology designed to help us was the continually interrupting Microsoft Office paperclip. It is possible that internet-connected domestic devices could turn out to be ill-judged, poorly-designed, short-lived technological fads. But the present trend of devices that require relentless updates and patches, driven by security threats and privacy breaches, doesn’t make for a utopian-sounding future. Technology growth in the workplace can lead to loss of productivity; taken into the home, it could take a bite out of leisure time too. Terry Gilliam’s futuristic film Brazil was set in a technologically advanced society, yet the future it predicted was dystopic, convoluted and frustrating. Perhaps we’re heading down a similar path in the workplace and home: studies show that after a certain point, the gadgets and appliances we employ absorb more time and effort, showing diminishing marginal returns. We’re told to change passwords regularly, back up content to the cloud and install the latest software updates. Typically we have many internet-enabled devices already, from computers, phones and tablets to televisions, watches and activity trackers. Cisco predicts that 50 billion things will be connected to the internet in five years’ time. Turning such a colossal number of “dumb” items into “smart”, web-connected devices could become the biggest micro-management headache for billions of users. Security updates for your internet fridge or web toaster? What happens when one causes it to crash? Once, you bought a television, turned it on and it entertained you. These days it could be listening to your private conversations and sharing them with the web. That’s not to say a television that listens is bad – it’s just another concern introduced thanks to this multi-layered technology onion that’s been presented to us.

Good for some, not necessarily for all

Some smart technologies are designed for and better suited to certain groups, such as the elderly or disabled and their carers. There are genuine, real-world, day-to-day problems for some people that something like Google Glass or an internet-enabled bed could solve. But the problems that affect anything that’s computerised and internet-connected re-appear: patches, updates, backups and security. Once, we wore glasses until our prescription ran out, and the only update a person applied to their bed was to change the linen for a cleaner version. Internet of things devices and online accounts are unlikely to take care of themselves. With so many dissimilar devices and no uniformity, managing our personal technological and digital identities could be an onerous task.
Much of this is likely to be managed via smartphones, but our dependence on these tiny computers has already demonstrated negative impacts on certain people. Could we witness a technological version of Dunbar’s Number, which suggests there is a limit to the number of people we can maintain stable social relationships with? Perhaps we can realistically only manage so many devices and accounts before it gets too much.

Too much choice

Facebook founder Mark Zuckerberg famously explained that he wears the same T-shirt every day to reduce the number of decisions he has to make. Yet technology keeps pushing us towards having to make more decisions: how we respond to emails, which software to use, how to update it, how to interact on social media – and that’s before we start getting messages from our internet-enabled bathroom scales telling us to shape up. You only need to watch the weekly episodes of BBC Click or Channel 5’s Gadget Show to see the rapid pace with which technology is moving. Technological complexity increases – and what reaches the marketplace are essentially unfinished versions of software in a perpetual state of beta testing and updating. In a highly competitive industry, technology companies have realised that, even though they cannot legally sell a product with a built-in shelf life, there is little to gain by building devices to last as long as the mechanical devices of the last century, when low-tech washing machines, cars and lawn mowers wouldn’t face failures from inexplicable software faults. Of course some will find their lives improved by robot cleaners, gardeners and washing machines they can speak to via their phone. Others will look to strip away the amount of technology and communication in their lives – as writer William Powers did in his book Hamlet’s Blackberry. The majority of us will probably just be biting off more than we can chew.

  • How to avoid bogus health information on the web

    The post below was originally published in The Conversation and is republished under CC BY 2.0.

Health is one of the biggest topics searched for on the web, yet despite its importance a large portion of this information is inaccurate, anecdotal or biased. According to Pew Research, 72% of internet users in the US search for health information. In the UK, the Office for National Statistics said that 43% of users searched for health information in 2013. Empowering patients to understand and manage their own health is an important issue at a time when departments are under increased pressure. The NHS is keen to encourage the public to take better care of their health – to know how to spot the early symptoms of bowel cancer, for example. But given that inaccurate online information is now just part and parcel of the web, should a universal quality kitemark be applied to good sources to help health consumers make better decisions?

Drinking from a fire hose

There has been no shortage of articles written about the problems of accessing poor health information on the web. One paper in the Lancet in 1998 quoted a US public health official as saying: “Trying to get information from the internet is like drinking from a fire hose, and you don’t even know what the source of the water is.” Seventeen years on, this problem still remains. Many people – and patients – don’t realise the origins of some of this health information, just that it was on the first page of Google’s search results. This equates page rank with quality, yet many good health organisations and charities don’t have the resources to optimise their search results position. All too often searches take users to results such as Yahoo Answers, or to some spurious website run by an online snake oil salesman claiming to sell a product that can cure them of their ailments. Their existence proves there is very much a market for health cures that have no clinical evidence of their effectiveness. Little attention is paid, either, to factors such as authorship, web links, date of publication, who is behind the website and whether they have ties to commercial companies. Web 2.0 and social media not only allowed consumers to find information on the web and discuss it, but made it far easier for anyone with a motive to publish – a potentially dangerous scenario in a healthcare context. There are high-quality health information websites that offer comprehensive services, from symptom checkers to peer-support groups. Despite this, the issue remains: aside from those like NHS Choices and Boots WebMD, how do patients know which websites to trust? Comprehensive health websites built on knowledge and impartiality, such as Patient.co.uk and Netdoctor and, in the US, the Mayo Clinic, vie for attention among the many forums, blogs and websites providing inaccurate and potentially harmful information.

Flying kites

So what can be done to give users more trust in particular websites? The NHS could encourage users to access and critique good health information – it has already done this by targeting marketing towards specific health groups. Then there is The Information Standard – a certification programme run by NHS England for organisations that produce evidence-based healthcare information for the public. This could also be spread more widely to online content and promoted. Gaining the kitemark requires that information is clear, accurate, balanced and up-to-date.
Another not-for-profit organisation that tries to separate the good from the bad, similar to The Information Standard, is Health on the Net. HON was founded 20 years ago in Geneva and also provides a kitemark for quality information on the web. The problem for both of these certifications is that most patients are probably not aware of them, despite The Information Standard certifying 250 health-related websites and HON 5,000. And a small badge at the foot of a web page means users are no more likely to pay heed to it than to the terms and conditions of Facebook.

Critiquing information

Digital literacy remains a big challenge in modern society. Many socio-economic groups are either excluded from using the web or do not have the level of skills needed to critique and assess online information. Applying quality standards or kitemarks to a site can only do half of the job. In an age where web users are increasingly impatient to find information, it also becomes increasingly important for them to have clear signposting. For patients already in contact with services, front-line healthcare staff – perhaps with some training – could help to teach patients how and where to find the best information about their conditions and symptoms, and how to critique the results they find. Health consumers all want different things from the web: some search for health information for assurance, others for discussion, some for answers and knowledge. Official health campaigns encouraging people to be aware of potential symptoms are good, but teaching them where to access good information for multiple conditions at any time is surely better. At least through a programme of information education and the development of UK health web standards, not unlike those of the Health on the Net organisation, patients could confidently gain a better understanding of their symptoms and conditions and use this knowledge to improve their health.

  • Peer review is fraught with problems, and we need a fix

    The post below was originally published in The Conversation under a CC BY 2.0 Licence.

Dirty Harry once said: “Opinions are like assholes; everybody has one”. Now that the internet has made it easier than ever to share an unsolicited opinion, traditional methods of academic review are beginning to show their age. We can now leave a public comment on just about anything – including the news, politics, YouTube videos, this article and even the meal we just ate. These comments can sometimes help consumers make more informed choices. In return, companies gain feedback on their products. The idea was widely championed by Amazon, who have profited enormously from a mechanism which not only shows opinions on a particular product, but also lists items which other users ultimately bought. Comments and star-ratings should not always be taken at face value: Baywatch actor David Hasselhoff’s CD “Looking for the Best” currently enjoys 1,027 five-star reviews, but it is hard to believe that the majority of these reviews are sincere. Take for instance this comment from user Sasha Kendricks: “If I could keep time in a bottle, I would use it only to listen to this glistening, steaming pile of wondrous music.” Anonymous online review can have a real and sometimes destructive effect on lives in the real world: a handful of bad Yelp reviews often spell doom for a restaurant or small business. Actively contesting negative or inaccurate reviews can lead to harmful publicity for a business, leaving no way out for business owners.

Academic peer review

Anonymous, independent review has been a core part of the academic research process for years. Prior to publication in any reputable journal, papers are anonymously assessed by the author’s peers for originality, correct methodology and suitability for the journal in question. Peer review is a gatekeeper system that aims to ensure that high-quality papers are published in an appropriate specialist journal. Unlike film and music reviews, academic peer review is supposed to be as objective as possible. While the clarity of writing and communication is an important factor, the novelty, consistency and correctness of the content are paramount, and a paper should not be rejected on the grounds that it is boring to read. Once published, the quality of any particular piece of research is often measured by citations: the number of times that a paper is formally mentioned in a later piece of published research. In theory, this aims to highlight how important, useful or interesting a previous piece of work is. More citations are usually better for the author, although that is not always the case. Take, for instance, Andrew Wakefield’s controversial paper on the association between the MMR jab and autism, published in leading medical journal The Lancet. This paper has received nearly two thousand citations – most authors would be thrilled to receive a hundred. However, the quality of Wakefield’s research is not at all reflected by this large number. Many of these citations are a product of the storm of controversy surrounding the work, and are contained within papers which are critical of the methods used. Wakefield’s research has now been robustly discredited, and the paper was retracted by The Lancet in 2010. Nevertheless, this extreme case highlights serious problems with judging a paper or an academic by number of citations. More sophisticated metrics exist.
The h-index, first proposed by physicist Jorge Hirsch, tries to account for both the quality and quantity of a scholar’s output in a single number: a researcher who has published n papers, each of which has been cited at least n times, has an h-index of n (a short computational sketch of this definition appears after this excerpt). In order to achieve a high h-index, one cannot merely publish a large number of uninteresting papers, or a single extremely significant masterpiece. The h-index is by no means perfect. For example, it does not capture the work of brilliant fledgling academics with a small number of papers. Recent research has examined a variety of alternative measures of scholarly output, “altmetrics”, which use a much wider set of data including article views, downloads and social media engagement. Some critics argue that metrics based on tweets and likes might emphasise populist, attention-seeking articles over drier, more rigorous work. Despite this controversy, altmetrics offer real advantages for academics. They are typically much more fine-grained, providing a rich profile of the demographic who cite a particular piece of work. This system of open online feedback for academic papers is still in its infancy. Nature journals recently started to provide authors with feedback on page-views and social media engagement, and sites such as Scirate allow Reddit-style voting on pre-print articles. However, traditional peer-reviewed journals and associated metrics such as the impact factor, which broadly characterises the prestige associated with a particular journal, retain the hard-earned trust of funding organisations, and their power is likely to persist for some time.

Post-publication review

Post-publication review is a model with some potential. The idea is to get academics to review a paper after it has been published. This would remove the bottleneck that journals currently create, because editors are involved and peer review has to be done prior to publication. But there are limitations. Academics are never short of opinions in their areas of expertise – it goes with the territory. Yet passing comment publicly on other people’s research can be risky, and negative feedback could provoke a retaliation. Post-publication review also has the potential for bias via preconceived judgements. One researcher may leave harsh comments on another’s research simply because they do not like that person: rivalry in academia is not uncommon. Trolling on the web has become a serious problem in recent times, and it is not just the domain of the uneducated, bitter and twisted, but is also enjoyed by members of society who are supposedly balanced, measured and intelligent. One post-publication review platform, PubPeer, allows anonymous commenting. This could offer reviewers an extra level of protection for what they say, but – as seen with other sites that allow anonymous posts – it could also open the door to more trolling and abusive behaviour. One researcher recently filed a lawsuit over anonymous comments on PubPeer which they claim caused them to lose their job, after accusations of misconduct in their research. In a similar case, an academic claimed to have lost project funding after a reviewer complained about a blog post they had written about their project. Post-publication comment can also be susceptible to manipulation and bias if not properly moderated. Even then, it is not easy to detect how honest and sincere someone is being over the web.
Recent stories featuring TripAdvisor and the independent health feedback website Patient Opinion show how rating and review systems can come into question. Nevertheless, research can possibly learn something from the likes of Amazon in how a long tail of research discoverability can be created. Comments and reviews may not always truly highlight how good a piece of research is, but they can help create a global post-publication dialogue about that topic of research which in time sparks new ideas and publications. Many now believe that the long-standing metrics of academic research – peer review, citation-counting, impact factor – are reaching breaking point. But we are not yet in a position to place complete trust in the alternatives: altmetrics, open science and post-publication review. What is clear is that in order to measure the value of new measures of value, we need to try them out at scale.
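
As promised above, here is a short computational sketch of the h-index definition given in the excerpt: the largest n such that a researcher has n papers cited at least n times each. The function name and the example citation counts are illustrative only.

```python
def h_index(citations: list) -> int:
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, cites in enumerate(ranked, start=1):
        if cites >= position:
            h = position        # this many papers have at least this many citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times give an h-index of 4:
# four papers have at least 4 citations each, but there are not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # -> 4

# A single blockbuster paper still gives h = 1, which is the weakness the post
# notes for "brilliant fledgling academics with a small number of papers".
print(h_index([100]))             # -> 1
```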

  • Social media is a ticking time bomb for universities with an outdated web presence.

    The post below was originally published in the LSE Impact of Social Sciences Blog under a CC BY 2.0 Licence.

A substantial issue at hand in the higher education community is the tricky balancing act academics and their institutions face in managing their traditional websites and the growing number of individual and group Social Media presences. Compared to other large-scale organisations, universities have been slow on the uptake of Social Media and are only now realising (partially thanks to their students and a growing requirement to be more open and accountable to fundholders and society) that they need to get out there and be ‘Liked’. Back in the 1990s universities not only understood the importance of having a Web identity but many had the in-house know-how to build one. By the dawn of Web 2.0 in 2005 many had grown into large organisational monsters, unwieldy and hard to navigate. My own institution in Sheffield moved to a content management system that allowed each and every member of staff to create and alter content on their own part of the website. In theory it seemed a good idea, that you could democratise your website, yet in reality it was very uneven. Many of those trained in the technology to update their pages soon forgot how to do the trickier updates whilst others picked up the slack. In time the website grew to tens of thousands of pages, with every little project, resource and list being compiled online. The ever-decreasing cost of bandwidth and web space afforded institutions that democracy. Yet with it came a price: most pages needed updating and managing, and that was all before Social Media started to take off about five years ago. The problem academic institutions now face in this second wave of outward-facing content is determining who is behind its creation, what it says and whether it is consistent. I’m all in favour of the things that Tim Berners-Lee gave us from his lab at CERN 25 years ago: Net Neutrality, democracy and the Web 2.0 way of everyone having ownership. It allowed us all to be artists, writers, librarians, journalists, learners and teachers. Nevertheless, there is an underlying concern of mine, as an information specialist, that somewhere down the line we might be heading for a fall. Of course Social Media is very different from traditional Web presences: anyone can set up a profile on Twitter, Facebook, ResearchGate, Google Scholar, Mendeley, LinkedIn or Google+, start a blog or make a YouTube video. Add to that the great possibilities of sharing research thanks to ImpactStory, Figshare and Altmetric.com, to name but a few. So with that comes the rub: there is a high number of resources to choose from, not just for the individual but for the academy, and this list will grow and grow. My role is to engage with my colleagues, peers and students to encourage them to make the best use of technology, whether it is to learn, teach or collaborate. It can be very fragmented, and being fractured is not something best suited to academics; after all, the majority of the really good ones got to where they are by being focused on two or three things and doing them well. Dunbar’s Number informs us people can only maintain 150 stable social relationships, whilst Statistic Brain released data earlier this year saying the average Facebook user has 130 friends. Can the same be applied to the technologies that an organisation or academic can successfully manage? I have countless professional Social Media accounts amongst dozens if not hundreds of other, invariably free, Web accounts.
Yet that is part and parcel of my role: to look at and assess these technologies. I saw the promise in Mendeley, Google Apps and Prezi, to name but a few, by using them. For academics and their support teams the situation is different; they are focused on either research, teaching or support first and foremost, and often the tools to facilitate this come later on. As we know, many academics now acknowledge that Twitter will benefit their networks and their knowledge gathering and sharing. Facebook can help engage with students, and videos can translate and disseminate research much better than words can in our ever-diminishing attention spans, facilitated by the likes of the Journal of Visualised Experiments. Whilst most of the tools I mentioned previously have benefits, it can be a case of horses for courses. For example, if you are a social scientist it is unlikely you would use Mendeley API apps like Plasmid or openSNP. There are several questions academics and organisations will need to address over the next couple of years, around legacy: who is behind your institutional and project Social Media streams? Are they personal accounts, and do you risk having your own HMV Twitter incident? Have they been set up by individuals with good intentions to promote your research output, or because they use Facebook a bit? Or are they secretaries and administrators who have been told to set up a Twitter account or blog and post items as and when? I once heard the previous President of the Chartered Institute of Library and Information Professionals, Phil Bradley, announce that “Web 2.0 is a state of mind”. He wasn’t far wrong, and the same goes for Social Media, in that you have to some extent to immerse yourself in it, understand it, understand why you are using it, the pitfalls and benefits and, most of all, the opportunities. Social Media, Altmetrics and the now much less mentioned Web 2.0 all afford academia a wealth of possibilities if it takes them. What is needed is wide-scale demystification of it all, and systems to help academics dovetail a few choice tools to help bring how they work into a modern setting, in the same way MOOCs have made us think about outdated teaching models. There needs to be greater digital literacy and there need to be frameworks to help orchestrate online content in the future. Science Daily reported last year that 90% of the world’s data was generated over the previous two years. Academics engaging with Social Media and Web technologies still remain in the minority; you only have to attend a conference or look for academic discussion on Twitter and Google+ to see this. Tools like Mendeley and ResearchGate are changing that balance as more and more PhD students and early career researchers look to these tools for building networks, but we are still some way from critical mass. Social Media and Altmetrics are by their very nature the more relaxed and informal forms of communication; some of these tools are transient, short-lived or just too niche, so trying to formulate processes for some of them is impractical. There is no silver bullet fix, as we are not too sure what is broken yet, but as with the ever-growing academic websites of the 1990s and their content, there is a risk that the important messages will get lost as we produce even more social data than we can imagine.
