More Library Mashups now published #mashlib

Standard

Nicole Engard has published More Library Mashups, a follow-up to her Library Mashups book. It includes chapters on tools people can use to create data mashups for libraries and information services, as well as examples of a wide range of actual library data mashups and details about how they were created.

The full run-down of the chapters appears below, so you can get an idea of what is covered. I’ll add a disclaimer here: I’m fortunate to have a chapter about ifttt.com in the book too. In fact, it’s available as a free sample chapter.

  • IFTTT Makes Data Play Easy (Gary Green)
  • The Non-Developer’s Guide to Creating Map Mashups (Eva Dodsworth)
  • OpenRefine(ing) and Visualizing Library Data (Martin Hawksey)
  • Umlaut: Mashing Up Delivery and Access (Jonathan Rochkind)
  • Building a Better Library Calendar With Drupal and Evanced Events (Kara Reuter and Stefan Langer)
  • An API of APIs: A Content Silo Mashup for Library Websites (Sean Hannan)
  • Curating API Feeds to Display Open Library Book Covers in Subject Guides (Rowena McKernan)
  • Searching Library Databases Through Twitter (Bianca Kramer)
  • Putting Library Catalog Data on the Map (Natalie Pollecutt)
  • Mashups and Next Generation Catalog at Work (Anne-Lena Westrum and Asgeir Rekkavik)
  • Delivering Catalog Records Using Wikipedia Current Awareness (Natalie Pollecutt)
  • Telling Stories With Google Maps Mashups (Olga Buchel)
  • Visualizing a Collection Using Interactive Maps (Francine Berish and Sarah Simpkin)
  • Creating Computer Availability Maps (Scott Bacon)
  • Getting Digi With It: Using TimelineJS to Transform Digital Archival Collections (Jeanette Claire Sewell)
  • BookMeUp: Using HTML5, Web Services, and Location-Based Browsing to Build a Book Suggestion App (Jason Clark)
  • Stanford’s SearchWorks: Mashup Discovery for Library Collections (Bess Sadler)
  • Libki and Koha: Leveraging Open Source Software for Single Sign-on Integration (Kyle M. Hall)
  • Disassembling the ILS: Using MarcEdit and Koha to Leverage System APIs to Develop Custom Workflows (Terry Reese)
  • Mashing Up Information to Stay on Top of News (Celine Kelly)
  • A Mashup in One Week: The Process Behind Serendip-o-matic (Meghan Frazer)

I’m looking forward to receiving my copy and I’m sure I’ll be reporting back on some of the ideas featured in it.

Spend Love Index: Idea for National Hack the Government event #NHTG14

Standard

This weekend Rewired State are running a National Hack the Government event around the UK. I won’t be attending, but I thought I’d submit an idea that those attending might want to work on.

I called it the Spend Love Index, and the idea is to see whether a council’s spend on a service is proportional to the social media love it receives in response to that service.

Steps involved could be:

  1. Take the budget figures for a specific council service (eg Fakeshire Council Library Service).
  2. Collect all mentions of Fakeshire’s Library Service across various social media channels, extracting the user sentiment (ie happy, unhappy or angry).
  3. Do the same for all Library Services across England.
  4. Produce a sliding scale of happiness/satisfaction with the services based on funding & sentiment.

Budget figures could be taken from CIPFA annual library stats and sentiment analysis APIs could be used.
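As a very rough illustration of step 4, here’s a minimal Python sketch of how the final ranking might be produced once the budget figures and sentiment counts have been gathered. All of the names, numbers and the scoring formula are made up purely for illustration.

    # Rank library services by "love per pound spent".
    # All figures are invented; real budgets would come from the CIPFA
    # annual library stats and sentiment counts from a sentiment-analysis API.
    from collections import namedtuple

    Service = namedtuple("Service", "name budget happy unhappy")

    services = [
        Service("Fakeshire Libraries", budget=2_400_000, happy=310, unhappy=90),
        Service("Othershire Libraries", budget=1_100_000, happy=120, unhappy=150),
        Service("Somewhere City Libraries", budget=3_000_000, happy=400, unhappy=60),
    ]

    def love_index(s):
        """Share of positive mentions per £100k of budget (an arbitrary scale)."""
        mentions = s.happy + s.unhappy
        if mentions == 0:
            return 0.0
        return (s.happy / mentions) / (s.budget / 100_000)

    for s in sorted(services, key=love_index, reverse=True):
        print(f"{s.name}: {love_index(s):.3f}")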

Disclaimers

I know this isn’t a scientific approach and I don’t expect the results to be taken seriously – it’s about looking at things in a different way.

I chose libraries because that’s the sector I work in, and it isn’t me pointing fingers at library services that have made cuts.

Popular Bookmarks Yahoo Pipes Search Experiment #MashLib

Standard

A while ago I experimented with Yahoo Pipes to put together a search tool that aggregates links everyone has saved to social bookmarking sites Digg, Pinboard and Delicious and returns the most popular recent sites based on a simple keyword search. NB: I’m not talking about only the bookmarks I’ve saved, but all bookmarks saved by the communities on these sites.

So, if you enter the phrase “technology” you might get the following results list:

http://www.nytimes.com [13]

http://www.theatlantic.com [13]

http://www.theverge.com [9]

http://www.youtube.com [7]

…etc

The results are displayed in popularity order and the number in square brackets indicates the number of times anyone has recently bookmarked the site on Digg, Delicious or Pinboard. Each site in the results list also acts as a clickable link to that site.
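If you’re curious how the same aggregation could work outside Yahoo Pipes, here’s a minimal Python sketch of the general idea: fetch recent keyword-matching bookmarks from each service’s RSS feed and count how often each site’s domain appears. The feed URL patterns below are placeholders rather than the services’ real query formats.

    # Count how often each domain appears across several bookmarking feeds.
    # The feed URL templates are placeholders - each service has its own
    # real search/RSS query format.
    from collections import Counter
    from urllib.parse import urlparse

    import feedparser  # pip install feedparser

    FEED_TEMPLATES = [
        "https://bookmark-site-one.example/rss/search?q={query}",
        "https://bookmark-site-two.example/feeds/tag/{query}.rss",
        "https://bookmark-site-three.example/popular/{query}/rss",
    ]

    def popular_sites(query, limit=10):
        counts = Counter()
        for template in FEED_TEMPLATES:
            feed = feedparser.parse(template.format(query=query))
            for entry in feed.entries:
                domain = urlparse(entry.get("link", "")).netloc
                if domain:
                    counts[domain] += 1
        return counts.most_common(limit)

    for domain, hits in popular_sites("technology"):
        print(f"http://{domain} [{hits}]")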

As it’s been created in Yahoo Pipes you can also get a variety of useful data formats as output, including RSS, JSON and PHP.

I decided to put it together as a way of discovering new sites, based upon sites other people had recently found useful. It doesn’t currently provide a comprehensive list of sites, but it does offer an alternative way of discovering sites that might not have been returned by the big-name search engines.

It’s something I’d like to develop, but I’d forgotten about it until @AgentK23 mentioned something to me recently about collaborative bookmarking.

How I’d like to develop it…

  • Include as many social bookmarking sites as possible as part of the aggregation process to improve the comprehensiveness of the search results. The three mentioned above are the ones for which I could easily generate a hackable and useful search/result query URL. For example, I couldn’t do anything useful with Diigo bookmarks, as it limits the results of community RSS feeds to 20 items (Edit: see the positive update at the foot of this blog post). I’d be happy to receive suggestions about other social bookmarking sites I could tap into in this way.
  • The clickable links to the websites mentioned in the search results currently just go to the home page of those sites, but I’d like to work out a way to go directly to relevant articles on the site instead. Because different websites have different search query structures, I couldn’t turn the links into ones that focus on the search keyword that had been entered. For example, the New York Times link for the “technology” search mentioned earlier goes to www.nytimes.com, not http://query.nytimes.com/search/sitesearch/#/technology (see the sketch after this list).
  • Yahoo Pipes is a useful tool to try out ideas like this, but I’m still not sure about its reliability. So, I should think about developing this without relying on Yahoo Pipes.
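One possible way around the per-site search problem would be a small lookup of search URL templates, falling back to the home page when a site’s pattern isn’t known. The templates below (other than the New York Times one mentioned above) are illustrative guesses and would need checking against each site’s actual search URL structure.

    # Map known domains to site-search URL templates; anything unknown
    # falls back to the home page, as the tool does now.
    SEARCH_TEMPLATES = {
        "www.nytimes.com": "http://query.nytimes.com/search/sitesearch/#/{query}",
        "www.youtube.com": "https://www.youtube.com/results?search_query={query}",
    }

    def result_link(domain, query):
        """Return a site-search URL if the pattern is known, else the home page."""
        template = SEARCH_TEMPLATES.get(domain)
        return template.format(query=query) if template else f"http://{domain}"

    print(result_link("www.nytimes.com", "technology"))
    print(result_link("www.theverge.com", "technology"))  # falls back to home page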

Here’s the link to it if you want to try it out. Any feedback would be appreciated… and remember, it’s just an experiment and not a commercial product.

As most search tools have a daft name I thought I’d call it “DiPiDel POP!” – An abbreviation of Digg, Pinboard, Delicious Popular. 🙂

Update: Thanks to Marjolein Hoekstra who followed up on this post and got in touch with Diigo about my issue. They have now extended the RSS feed to 100 items, which is very responsive of them and great news too, as I can now use the site as an aggregation source. As well as including Diigo in the aggregation process, I’ve also now included Blogmarks and Bibsonomy. Thanks to Marjolein for suggesting them too.

Try These ifttt Alternatives

Standard

If you find ifttt useful you might want to take a look at these services too.

Zapier
WeWiredWeb
Elastic.io
Cloudwork

It might be that you like the look of ifttt, but it doesn’t quite suit your needs or the way you work, or it doesn’t connect channels that you use. If that’s the case maybe one of these services will suit you instead.

Of these, I’d say Zapier and WeWiredWeb are the most similar to ifttt. Zapier appears to be able to connect the most channels.

List of Library and Book APIs on Programmable Web #mashlib

Standard

Programmable Web have published an article about the library and book APIs/mashups listed on their site – 49 APIs in total. It gives details of what each of the APIs does and the data formats and communication protocols they use. Handy information for the Mashed Library community.

Dapper.net: How To Make Feeds From Web Pages That Really Don’t Want You To

Standard

If I ever want to put together a mashup or just tinker with data on the web, my first port of call is Yahoo Pipes. However, even though I really like Pipes, it frustrates me a fair amount of the time too. Sometimes it behaves erratically and I get a sulk on with it. So, I decided to have a scout around for other ways of achieving what I want.

My first great find is Dapper. I imagine this is old hat to some people, as it’s been around for a few years. It’s actually owned by Yahoo too. As the site itself says…

Dapper is a tool that enables users to create update feeds for their favorite sites and website owners to optimize and distribute their content in new ways.

It doesn’t do the same thing as Yahoo Pipes, but it is extremely handy for pulling out data from web pages where a feed doesn’t exist, and it provides the output in the following formats (where relevant to the data on the page): XML, RSS, HTML, Google Gadget, Google Map, Image Loop, iCalendar, ATOM, CSV, JSON, XSL, YAML. I’m not going to pretend that I know what all of these formats are, but they seem like a fairly handy group to be able to use.

I thought I’d see if I could create an RSS feed for our library catalogue. I’ve always wanted an RSS feed for it (so we can feed stock information through to different places easily) and I’ve also wanted a way to produce alerts for new titles (so users can be informed about any new stock they may be interested in), but our library catalogue offers neither. Now, using Dapper, I can do both easily.

[Dapp Factory screen capture]

To achieve this Dapper asks you to:

  1. Provide URLs of web pages your data appears in. You just need to provide sample pages here; I gave it URLs of catalogue search results pages.
  2. Highlight samples of the data on these pages that you want in your feed. I highlighted fields containing Title, Author, Format (eg Hardback, DVD, etc), Book cover, Number of copies and then told Dapper what to call these fields.
  3. Group together data fields – this effectively puts related data together in a single record. If you don’t do this you end up with a list of unrelated data items in your RSS feed, rather than a list of ready formed records.
  4. Identify any portion of the URL that can be changed by the user to create a brand new search using that resource. For example, in my URL I changed “_TitleResults.aspx?page=1&searchTerm=cake&searchType=99&searchTerm2=&media=&br” to “_TitleResults.aspx?page=1&searchTerm={Query}&searchType=99&searchTerm2=&media=&br”, so I could easily create a new feed for a search on any other keyword without having to go through the whole process again (see the sketch after this list).
  5. Choose the output format of the feed eg RSS, ATOM, HTML, iCalendar, etc (as mentioned earlier). You can also say which fields you want to appear in the output feed.
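To show what step 4 buys you, here’s a minimal Python sketch of reusing a Dapper-style feed URL by swapping a new keyword into the {Query} placeholder and parsing the resulting RSS. The feed URL below is invented for illustration; the real one is whatever Dapper gives you when you save your Dapp.

    # Build a new catalogue search feed by substituting the keyword into the
    # {Query} placeholder. The base URL is an invented example, not a real
    # Dapper address.
    from urllib.parse import quote

    import feedparser  # pip install feedparser

    DAPP_FEED = ("http://feeds.example.net/my-catalogue-dapp"
                 "?output=RSS&searchTerm={Query}")  # invented example

    def catalogue_feed(keyword):
        """Return parsed RSS entries for a catalogue search on `keyword`."""
        url = DAPP_FEED.replace("{Query}", quote(keyword))
        return feedparser.parse(url).entries

    for entry in catalogue_feed("cake"):
        print(entry.title, entry.link)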

In response to this Dapper gives you a unique URL for your feed.

From this stage you can also:

  1. Change the query text, as mentioned in step 4, and get a unique URL for the new feed.
  2. Set up a service using the feed you created. Here you can make it public and allow others to create their own searches by changing the query text. This is the service I created. I also created a Google Gadget and added it to my iGoogle page.
  3. Set up an email alert for your feed. So, if a new item is added to the feed (eg a new book comes in stock matching your search query) it will send you an email notification.

I’ve only been tinkering with it for a few hours, but it looks like it’s going to come in handy for pulling out and re-using data in web pages that has in the past been difficult for me to get at. 🙂