BiblioSight News

Integrating the Web of Science web-services API into the Leeds Met Repository


Final Progress Post

Posted by Nick on December 23, 2009

***Updated February 10th 2010***

Title of Primary Project Output:

The Bibliosight desktop application will allow users to specify an appropriate query, retrieve bibliographic data as XML from the Web of Science using the recently released (free) WoS API (WS Lite), and convert it into a suitable format for repository ingest via SWORD*

*Due to current limitations of WS Lite, the functionality to convert XML output has not been implemented – see this post on Repository News for more details.

Screenshots or diagram of prototype:


Diagram of how returned XML will be mapped onto LOM XML for ingest to intraLibrary (click on the image for full size):


The full Bibliosight process (click on the image for full size):

Description of Prototype:

The prototype is a desktop application written in Java that is linked to Thomson Reuters’ WS Lite, an API that allows the Web of Science to be queried by the following fields:

Field – searchable code:

  • Address (including the 5 fields below) – AD
    1. Street – SA
    2. City – CI
    3. Province/State – PS
    4. Zip/postal code – ZP
    5. Country – CU
  • Author – AU
  • Conference (including title, location, date, and sponsor) – CF
  • Group Author – GP
  • Organization – OG
  • Sub-organization – SG
  • Source Publication (journal, book or conference) – SO
  • Title – TI
  • Topic – TS

Queries may also be specified by date* and the service will support the AND, OR, NOT, and SAME Boolean operators.

*The date on which a record was added to WoS rather than the date of publication. In most cases the year will be the same but there will certainly be some cases where an article published in one year will not have been added to WoS until the following year.

An overview of the application is as follows:

Query options: Query – Allows the user to specify the fields to query in the form Code=(query parameter); the service supports wildcards, e.g. AD=(leeds met* univ*)

Query options: Date – Allows the user to specify either the date range (inclusive) or retrieve recent updates within the last week/two weeks/four weeks

Query options: Database: DatabaseID – Currently WOS only; to keep the client as flexible as possible this field is included to accommodate additional database IDs, should it become possible to plug in additional databases in the future.

Query options: Database: Editions – These checkboxes reflect the Citation Databases filter within WoS:

  • AHCI – Arts & Humanities Citation Index (1975-present)
  • ISTP – Conference Proceedings Citation Index- Science (1990-present)*
  • SCI – Science Citation Index (1970-present)
  • SSCI – Social Sciences Citation Index (1970-present)

*ISTP reflects the code currently used by the API – it is not clear why it doesn’t correspond with the term now used in WoS, which is CPCI-S – Conference Proceedings Citation Index – Science (1990-present)

Retrieve Options: Start Record – Allows user to specify start record to return from all results

Retrieve Options: Maximum records to retrieve – Allows user to specify maximum records to retrieve between 1 and 100 (N.B.  The API is currently restricted to a maximum of 100 records though it can be queried multiple times.)

Retrieve Options: Sort by (Date) (Ascending/Descending) – Allows the user to sort records (currently by date only) in ascending or descending date order.

Proxy settings: This is purely for the local network setup at Leeds Met and has nothing to do with WoS, but will be necessary for users who are behind a proxy server.

View results: View results of current query (as XML)

Save results: Save results of current query

Perform search request: Perform the specified query
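
Since WS Lite caps a single response at 100 records but can be queried multiple times, the client can page through a larger result set by stepping the start record forward in blocks of 100. The sketch below is purely illustrative of that loop: Page and searchPage() are hypothetical placeholders standing in for however the client actually calls WS Lite and parses its response, not the real service stubs.

import java.util.ArrayList;
import java.util.List;

// Illustrative paging sketch only: Page and searchPage() are hypothetical
// placeholders, not the real WS Lite stubs.
public class PagingSketch {

    static final int PAGE_SIZE = 100; // WS Lite returns at most 100 records per request

    // Minimal stand-in for one page of results
    static class Page {
        int numberOfItemsFound;  // total matches reported by the service
        List<String> items;      // raw record XML fragments in this page
    }

    // Placeholder: the real client would invoke the WS Lite search operation here,
    // passing the user query, the record offset and the maximum records to retrieve
    static Page searchPage(String userQuery, int firstRecord, int count) {
        throw new UnsupportedOperationException("call WS Lite here");
    }

    static List<String> retrieveAll(String userQuery) {
        List<String> all = new ArrayList<String>();
        int firstRecord = 1;                 // record offsets are 1-based
        int totalFound = Integer.MAX_VALUE;  // replaced by the figure in the first response
        while (firstRecord <= totalFound) {
            Page page = searchPage(userQuery, firstRecord, PAGE_SIZE);
            totalFound = page.numberOfItemsFound;
            all.addAll(page.items);
            firstRecord += PAGE_SIZE;        // 1, 101, 201, ...
        }
        return all;
    }
}

For example, retrieveAll("AD=(leeds met* univ*)") would page through the full result set for the address query above, 100 records at a time.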

Link to working prototype:

Distributing a working prototype is problematic because it has a number of dependencies, some of which are specific to the WS Lite service; in our view it is less confusing to release the code only, which is available from http://code.google.com/p/bibliosight/

A screen-cast of the working prototype is available here.

Please note that you will require an appropriate subscription to ISI Web of Knowledge (the service requires an authorised IP address). You will also need to register for the Thomson Reuters Web of Science® web services programming interface (WS Lite) by agreeing to the Terms & Conditions at http://science.thomsonreuters.com/info/terms-ws/ and completing a registration form; if you have any problems you should contact your Thomson Reuters account manager.

Link to end user documentation:

End user documentation:  https://bibliosightnews.wordpress.com/end-user-documentation/

About the project:  https://bibliosightnews.wordpress.com/about/

For use cases see: https://bibliosightnews.wordpress.com/use-cases/

Link to code repository or API:

The code is available from http://code.google.com/p/bibliosight/

Link to technical documentation:

Technical documentation for WS Lite is available from Thomson Reuters and you should address enquiries to your Thomson Reuters account manager.

The code available from http://code.google.com/p/bibliosight/ is fully commented.

Date prototype was launched:

February 9th 2010 (This is code only, not a  distribution of a working prototype – there is some very basic info in there on what you’d need to get it running.)

A screen-cast of the working prototype is available here.

Project Team Names, Emails and Organisations:

Wendy Luker (Leeds Metropolitan University)      w.luker@leedsmet.ac.uk

Arthur Sargeant (Leeds Metropolitan University)  a.sargeant@leedsmet.ac.uk

Peter Douglas (Intrallect Ltd) p.douglas@intrallect.com

Michael Taylor (Leeds Metropolitan University) m.taylor@leedsmet.ac.uk

Nick Sheppard (Leeds Metropolitan University) n.e.sheppard@leedsmet.ac.uk

Babita Bhogal (Leeds Metropolitan University) b.bhogal@leedsmet.ac.uk

Sue Rooke (Leeds Metropolitan University)  s.rooke@leedsmet.ac.uk

Project Website:

https://bibliosightnews.wordpress.com/

PIMS entry:

https://pims.jisc.ac.uk/projects/view/1389

Table of Content for Project Posts:

  1. First Post
  2. Quickstep into rapid innovation project management
  3. Project meeting number 1:  Draft Agenda
  4. Project meeting – minutes
  5. eurocris
  6. JournalTOCs
  7. SWOT analysis – a digital experiment
  8. Generating use-cases
  9. No one said it would be easy
  10. SWOT update
  11. Project meeting number 2:  Draft Agenda
  12. Use case meeting
  13. 20 second pitch at #jiscri
  14. Project meeting – minutes
  15. Small but important win – we have XML!
  16. Research Excellence Framework:  Second consultation on the assessment and funding of research
  17. JISC Rapid Innovation event at City of Manchester stadium
  18. Quick reminder(s)
  19. Just round the next corner…
  20. Project meeting number 3:  Draft agenda
  21. More on ResearcherID
  22. User participation
  23. Project meeting – minutes
  24. Quick sketch
  25. Visit from Thomson Reuters
  26. Project meeting number 4:  Draft agenda
  27. Project meeting – minutes
  28. Thinking out loud…
  29. Quick sketch #2
  30. Mapping fields from WoS API => LOM
  31. Project meeting number 4:  Draft agenda
  32. The role of standards in Bibliosight
  33. Project meeting – minutes
  34. Web Services Lite
  35. JournalTOCsAPI workshop
  36. Steady as she goes – Bibliosight back on course

Posted in Bibliosight, Final Progress Post, Progress post | 1 Comment »

Steady as she goes – Bibliosight back on course!

Posted by Nick on December 18, 2009

The good ship Bibliosight was due into port at the end of November with the rest of the jiscri fleet; however, as I reported at the time, she found herself in a spot of heavy weather. After experimenting throughout the project with a more general, unrestricted API, we activated our subscription to Web Services Lite only to discover that it is a different enough product that it would need another reasonable chunk of time to learn and implement.  I’m pleased to report, however, that Mike has been at the helm night and day, battling manfully through the storm, and has managed to bring us back on course!

After some initial problems dealing with an authentication step and setting up a query so that it actually returned an appropriate XML response, it appears that the XML returned from WS Lite is somewhat better organised than that from the general API, and more customisable. For our XML transformation step this means we can simply create our own XML file in the format we want and transform it without having to worry about the oddities we were seeing with the general API. Mike initially thought that we could do without the XSLT altogether (i.e. have code to output in the formats we need), but that would reduce the flexibility of the process.
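
The flexibility point is worth spelling out: if the conversion step is driven by an external stylesheet rather than hard-coded output, switching to a different target format (LOM for intraLibrary, EndNote, or anything else) only means dropping in a different XSLT file. The fragment below is a minimal sketch of that approach using the standard Java XSLT API; the file names are hypothetical and it is not the Bibliosight code itself.

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch only: file names are hypothetical. Swapping wos-to-lom.xsl for another
// stylesheet changes the output format without touching the Java code.
public class TransformSketch {
    public static void main(String[] args) throws Exception {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("wos-to-lom.xsl")));
        transformer.transform(new StreamSource(new File("ws-lite-response.xml")),
                new StreamResult(new File("lom-record.xml")));
    }
}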

A sample record is reproduced below:

<?xml version="1.0" encoding="UTF-8"?>
<searchResponse>
<!-- Number of records in the database/editions selected -->
<numberOfItemsSearched>1000</numberOfItemsSearched>
<!-- Number of records that match the query parameters -->
<numberOfItemsFound>1</numberOfItemsFound>
<!-- Number of records in the result set -->
<numberOfItemsListed>1</numberOfItemsListed>
<!-- Date this file was created (generally would be used to date the query execution time) -->
<dateCreated>2009-12-09T15:30:00Z</dateCreated>
<items>
<item>
<!-- Seems to be always present -->
<title>Record title</title>
<!-- Seems to be always present -->
<authors count="3">
<author>Bloggs, J</author>
<author>Smith, J</author>
<author>Sheppard, N</author>
</authors>
<source>
<!-- Not always present -->
<bookSeriesTitle>Book series title</bookSeriesTitle>
<!-- Seems to be always present -->
<title>Source title</title>
<!-- Not always present -->
<volume>10</volume>
<!-- Not always present -->
<issue>1</issue>
<!-- Not always present -->
<pages>116-126</pages>
<!-- Not always present -->
<published>
<!-- Not always present -->
<date>JAN</date>
<!-- Seems to be always present -->
<year>2008</year>
</published>
</source>
<!-- Not always present -->
<keywords count="2">
<keyword>keyword 1</keyword>
<keyword>keyword 2</keyword>
</keywords>
<!-- Seems to be always present -->
<ut>000252821700009</ut>
</item>
</items>
<!-- This section echoes the query parameters used to generate the results -->
<searchRequest>
<queryParameters>
<databaseId>WOS</databaseId>
<!-- These are the only editions we seem to be entitled to -->
<editions count="4">
<edition collection="WOS">SCI</edition>
<edition collection="WOS">SSCI</edition>
<edition collection="WOS">AHCI</edition>
<edition collection="WOS">ISTP</edition>
</editions>
<!-- Symbolic time span can't be used in conjunction with time span -->
<symbolicTimeSpan>1week</symbolicTimeSpan>
<!-- This is a DATABASE time span, not a publication time span -->
<timeSpan>
<begin>2008-01-01</begin>
<end>2008-12-31</end>
</timeSpan>
<!-- Language is always 'en' -->
<userQuery language="en">AD=(leeds met* univ*)</userQuery>
</queryParameters>
<retrieveParameters>
<!-- Currently this is the only available sort field -->
<fields count="1">
<field>
<name>Date</name>
<sort>A</sort>
</field>
</fields>
<!-- Max returned records (1 - 100) -->
<count>100</count>
<!-- Record offset -->
<firstRecord>1</firstRecord>
</retrieveParameters>
</searchRequest>
</searchResponse>
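
For anyone wanting to work with a response like the one above, the element names are simple enough to pick out with the standard Java XPath API. The sketch below assumes the response has been saved locally as response.xml (a hypothetical file name) and just prints the main bibliographic fields; in the Bibliosight workflow these values would feed the LOM mapping rather than be printed.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Sketch only: assumes the WS Lite response above has been saved locally as
// response.xml (hypothetical file name).
public class RecordReaderSketch {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("response.xml"));
        XPath xp = XPathFactory.newInstance().newXPath();

        String title = xp.evaluate("/searchResponse/items/item/title", doc);
        String year  = xp.evaluate("/searchResponse/items/item/source/published/year", doc);
        String ut    = xp.evaluate("/searchResponse/items/item/ut", doc);
        NodeList authors = (NodeList) xp.evaluate(
                "/searchResponse/items/item/authors/author", doc, XPathConstants.NODESET);

        System.out.println(title + " (" + year + "), UT " + ut);
        for (int i = 0; i < authors.getLength(); i++) {
            System.out.println("  author: " + authors.item(i).getTextContent());
        }
    }
}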

And here is a diagram of how we expect to map the XML onto LOM XML for ingest to intraLibrary (click on the image for full size):

So far so good; now all we need is a UI:

The UI is not yet coupled to the API, but the basic components are now pretty much all in place; Mike has aimed to ensure that the client is as flexible as possible – it will allow users to limit a query by a specified date range (including recent updates) and can also accommodate additional database IDs, should it become possible to plug in additional databases in the future.

Hopefully we will get the boat floating early in the New Year, when we will finally be able to do some user testing as well as disseminate the code under an appropriate licence (probably the GNU General Public License Version 3 – http://www.gnu.org/copyleft/gpl.html).

Merry Christmas!

Posted in Bibliosight, Progress post | 3 Comments »

JournalTOCsAPI workshop

Posted by Nick on November 26, 2009

On Friday I was invited to participate in a workshop for the JournalTOCsAPI project at Heriot Watt University in Edinburgh.  I didn’t think I was going to make it at all due to the awful flooding in Cumbria; at one point we were told that trains were travelling no further than Carlisle due to the weather and that Scotland was effectively out of bounds. The tracks must have been dry enough, however, and I arrived just in time for Lisa Rogers’ introductory presentation “JournalTOCs Workshop – Introduction & Feedback”:

Then came Jenny Delasalle, Repository manager at Warwick University and chair of UKCORR, talking about “Repositories and Alerting Services”:

The third presentation was given by Santy Chumbe, the JournalTOCs Project manager, on behalf of Anne Dixon from the British Geological Survey who helped to test the first use case for the JournalTOCs project:

I was next up presenting on Bibliosight – though it remains to be seen just how relevant this will continue to be as we learn more about WS Lite:

Finally Phil Barker presented on “The Other Side of the Interface” which I found a most engaging re-evaluation of our developing repository/research infrastructure as a complex and dynamic “ecosystem” full of interacting (and evolving) entities and processes:

Thanks to the JournalTOCs team for an enjoyable and informative event, to Jenny and Phil for their presentations, and to Helen Muir and Colin Smith (Repository Manager at the Open University) for their insights throughout the day. It was particularly interesting for me to listen to Jenny and Colin discuss their respective practices at WRAP and ORO – both examples of successful and well-established Open Access repositories at major research institutions with much greater numbers of research outputs than Leeds Met. I certainly learnt a great deal about how I might use alerting services, including the JournalTOCsAPI, to alert me to new publications that I can pursue for the OA research repository at Leeds Met, and, along with Bibliosight and WS Lite, I shall aim to integrate some of what I learned into my workflows over the coming months.

Posted in Event, JournalTocs | 1 Comment »

Quick sketch #2

Posted by Nick on November 13, 2009

The diagram below is Arthur’s update of my earlier quick sketch to illustrate what Bibliosight will aim to achieve by the formal #jiscri deadline.

It is numbered and colour coded – stages 1 – 3 (shades of blue) are within the #jiscri timeframe; stages 4 (green) & 5 (buff) will require ongoing work beyond the deadline.

(N.B.  Click on the image for a full size view in a separate browser window.)

Bibliosight

Posted in Bibliosight | 2 Comments »

Project meeting – minutes

Posted by wendyluker on November 11, 2009

Minutes of the Bibliosight Meeting

Tuesday 20th October 2009

1.  Apologies

Nick, Sue, Babita

2.  Minutes of the last meeting, and actions

Actions :

WL/NS to pursue academic contacts for a representative – this has been ongoing, but at this stage of the project it seemed unlikely that we would now get a representative.  Academic staff/researchers are to be involved in evaluating the outcomes of the project.

PD to clarify upload of XML to intraLibrary including LOM extensions – Peter confirmed that this could be done.

NS/BB/SR to meet with another member of the URO to clarify potential use cases: Wendy reported that Nick had met with Sue Rooke and Sam Armitage, and work had been done on use cases.  Nick would be able to clarify this on his return to work.

PD to contribute blog post on technical standards: ongoing.
New action: Wendy to send Peter the required tags for the post.

All team members to contribute to on-going discussion on the blog – reiterated!

3. Update on meeting with Thomson Reuters

Mike updated the group on the meeting with Thomson Reuters.  We have access to the unrestricted API, but we are not entitled to use it to a greater extent than would be provided by the Web Services Lite version.  It appeared that the 100 record limit may not be an issue after all; in any case, if we download the initial set of records year by year this should not present a problem.  Wendy and Arthur reported on some testing of the Web of Science search interface that they had been doing to check whether the ‘Leeds Metropolitan Univ’ search would be sufficiently robust, and it appeared to be so.

We will need to display WofS / Thomson Reuters terms and conditions alongside any material retrieved from WofS.  There is a place in LOM for this.

4. Update on Use Cases

The use cases will be a useful output of the project, and need further work at this stage, e.g. we need to ensure we capture the information around the intended alerting service: at what point will individuals be alerted? Where will the alert come from?
More work is also needed on cataloguing workflows, and on how we will deal with the initial 1485 items that will be downloaded.

5. API – next steps in the development

Mike updated the group on progress with the API.

At this stage we can:

  • Get records out of WofS
  • Transform them into XML
    Action Nick: what is the LOM XML?
  • Load them into intraLibrary

Mike needed several decisions to be made before he could progress further:

Would the process for downloading be manual or automated? MANUAL

Would the client be desktop or web based: DESKTOP

It was also decided that the XSLT should be easily swappable so that the output can be produced in different formats, i.e. for other interfaces, whether EndNote, for example, or another repository.  This would be of benefit to the rest of the community.

The group discussed the diagram that Nick had put up on the Blog recently, with regard to the intended scope of the current project, and which tasks might be part of further developments.

Action: Arthur to update the diagram to make it clear what would be achieved by the end of November (encompassing the intended outputs of the original project) and what the future developments might be.

6. Project management tasks: technical standards and value add

The next of the project management tasks to be addressed on the blog would be day to day work.

Action: Nick on his return

Peter would supply a blog on technical standards

Action: Wendy to send Peter the appropriate tags.

7. Other business

There was no other business

8. Date and time of next meeting

The next meeting will be held on Tuesday 17th November, starting at 1pm.
Peter will arrive at approx. 11am for a pre-meeting with Nick (and others) about use cases.

Posted in Bibliosight, SCRUM minutes | 1 Comment »

Project meeting number 4: Draft agenda

Posted by wendyluker on October 19, 2009

The agenda for the meeting tomorrow is as follows:

1. Apologies

2. Minutes of the last meeting, and actions

3. Update on meeting with Thomson Reuters

4. Update on Use Cases

5. API: next steps in the development

6. Project management tasks:  Value Add and Technical Standards

7. Any other business

[Lunch]

Posted in Agenda | 1 Comment »

Visit from Thomson Reuters

Posted by Nick on October 2, 2009

On Wednesday afternoon Mike and I were finally able to sit down with Jon and…Gareth? (sorry, I’m terrible with names) from Thomson Reuters to discuss Bibliosight and the work we are doing with the WoS API; it probably goes without saying just how useful this was, especially so soon after our Tuesday meeting.

As we have come to appreciate, Thomson are still very much in an ongoing process of developing their suite of tools and commercial services around the extraction of data from WoS using their API. Overall, I was given the impression that the company are currently practising something of a balancing act, weighing their commercial interests against providing appropriate value-added services to their subscribers under existing licensing agreements – which is, of course, entirely reasonable.  Jon suggested that the Bibliosight project is something of a pioneer in using this technology and a useful case study for the company, which certainly puts some of our early difficulties into context – though he did indicate that numerous other folk are also actively investigating the API; in particular he mentioned Queens College Belfast, an institution in Birmingham, and R4R at King’s College London in collaboration with EPrints’ Les Carr at Soton.  R4R is the only project that I was hitherto aware of and have had any contact with; it would be really useful if we were able to communicate with others also using the API.

Thomson Reuters’ flagship commercial product is called InCites, which “supplies all the data and tools you need to easily produce targeted, customized reports… all in one place. You can conduct in-depth analyses of your institution’s role in research, as well as produce focused snapshots that showcase particular aspects of research performance.” We discussed how, though such a service will be invaluable for the research-oriented Russell Group institutions, it is likely to be overkill for a million plus institution like Leeds Met; nevertheless we do require a certain level of functionality to help us analyse our research performance which, alongside our traditional strengths in teaching and learning, is increasingly important, especially in view of the REF.  Hopefully this is where the developing ‘suite of tools’ comes in, and our guests were keen to get a handle on precisely what we are hoping to achieve with Bibliosight (aren’t we all!).  I outlined our preliminary use cases for them as a foundation for our discussion and was also keen to ask some of the specific questions that had arisen during the previous day’s meeting.  First of all I asked about the wording of the documentation, which appears to suggest that it is only possible to return 100 records with a single query using the API – they weren’t aware of such an issue and agreed that the way it was expressed in the documentation was a little ambiguous; Jon will follow this up for us, though Mike may also be able to elucidate the situation when he has investigated further.  They were able to say that another user had discovered that the API could be called twice every second, however, so they didn’t anticipate any problems with extracting all the data we need.

The major issue that came up at the meeting on Tuesday was how best to return all of the articles for a given institution, with the most appropriate field to query apparently being the address field.  It is not clear, however, how consistent the institutional address actually is, and Jon confirmed that it is derived from information harvested from individual journals/papers, which preliminary manual searching of WoS has already demonstrated to be idiosyncratic – at least in the case of Leeds Metropolitan University and almost certainly other institutions as well (leeds metropolitan university; leeds met [uni]; lmu etc.).  Jon suggested that the safest and most effective method of returning all records would actually be to use ResearcherID, though this would require all institutional authors to be registered and an additional paid subscription to ResearcherID download (as opposed to upload, which is free).  Failing that, however, he did confirm that the address field is the only way and that it may be necessary to build a catch-all query to ensure that we don’t miss anything – precisely how we achieve this is still a little bit of a moot point, though he did indicate that some work has been done on disambiguating institutional address formats within WoS and he will follow up on this for us in due course.

Through our discussion, Article Match Retrieval is finally beginning to make more sense to me, and Jon confirmed that this is the method that would be used in conjunction with the API to provide citation counts for an individual article – AMR can be queried by numerous fields including DOI and UT identifier (a unique identifier for a journal article assigned by Thomson Reuters).  In terms of the current project, I think it makes sense to focus on extracting bibliographic data first before worrying about citation metrics; via the API we can also extract the UT identifier and then use it to query AMR.

We also touched on Terms & Conditions: Thomson, again reasonably, expect WoS to be clearly acknowledged as the data source on each individual record – Mike wasn’t initially certain how this could easily be achieved from a technical perspective, at least in the case of bibliographic citation information (which may have been added manually); we have a few ideas on how this could be achieved, but it is really just something to be aware of at this stage.

All in all I now feel that the overall shape of the project is beginning to be resolved and, in addition to the technical work required to extract, store, parse and convert (XML) records and then pass them somewhere else (intraLibrary/EndNote), a large part of Bibliosight will necessarily focus on developing use cases for our institutional research administration, which is likely to continue well beyond the designated 6-month life-cycle of the #jiscri project!

Posted in Progress post, Research Excellence Framework, Thomson Reuters Research Analytics | 2 Comments »

Project meeting – minutes

Posted by Nick on October 1, 2009

(Date of meeting 29th September 2009)

Present:  Peter Douglas, Wendy Luker, Arthur Sargeant, Mike Taylor, Babita Bhogal, Sue Rooke, Nick Sheppard

1.  Apologies

No apologies

2.  Team membership

Thank you to Sue Rooke who has agreed to join the Bibliosight project team; Sue is a research administrator in the Faculty of Health and has already been involved in repository development, contributing to developing workflows and providing feedback on the Open Search interface.  We hope that Sue will contribute, in particular, to use case development.

The team is still lacking a representative from the academic community and we are currently waiting for a reply to recent correspondence. WL is attending the research sub-committee on Monday 5th October and may raise the issue there if necessary.

Action:  WL/NS to pursue academic contact(s) for a representative to sit on the project team

3.  Progress since last meeting

• API

We have now received the updated documentation from Thomson Reuters and Mike has submitted a query to the API  and received an appropriate response in XML. Thomson Reuters’ FAQ gives a full summary of the data fields that can be queried by the service and the data elements that can be returned which appears to be in line with this XML response.

We are therefore able to formally reduce the associated risk back to low:

  • Risk: API unsuitable for project deliverables
  • Probability: Low (elevated to Medium, 1st September 2009; reduced back to Low, 29th September 2009)
  • Impact: High
  • Action to prevent/manage risk: Feedback from Thomson Reuters indicates the proposal is technically feasible. Problems with the API/documentation have been mitigated by the release of new documentation from Thomson Reuters (29th September 2009).

N.B.  The wording of the documentation appears to suggest that it is only possible to return 100 records with a single query using the API – NS to clarify with Thomson Reuters.  If this is the case, the practical implications are limited in the case of Leeds Metropolitan University, which publishes a relatively small amount of research, but would be considerable for an institution with a greater research output.

Action:  NS to clarify 100 record limit with TR

Action:  MT to continue appropriate* implementation of API

* Hopefully what is “appropriate” will evolve over the coming weeks!

• Use cases

Technical difficulties have contributed to a lack of conceptual clarity amongst the project team and there was considerable discussion around precisely what data Bibliosight will now seek to retrieve from WoS using the API and what we will aim to achieve with that data.

There were several use case narratives outlined in the original bid; they focussed on an alert service for researchers and/or repository administrators to encourage the deposit of an appropriate full text in the repository, and perhaps neglected the obvious administrative use case whereby metadata from WoS is pulled directly into intraLibrary.

N.B.  An important use case was also the extraction of citation metrics that would potentially inform the REF – we are not yet clear how this would be achieved but we understand it will rely on the Article Match Retrieval service.

Of course we also want to produce outputs that are of use to the wider community rather than just to users of our specific repository software, and this reflects the considerations of the Readiness for REF project, which also hopes to enable UK repositories to make effective and efficient use of the WoS API (as part of a much broader project) and is focussing on EPrints, DSpace and Fedora as the most well established OA research repository platforms.  R4R raises several pertinent questions, many of which also arose independently and in a similar form during our own discussion:

  • What are the different workflows relevant to (i) backfilling a repository with a one-off download and (ii) ongoing use of WoSAPI to populate a repository?
  • What uses might records downloaded from WoSAPI be put to?
  • How might the workflows be designed to enable other datastreams also to help populate the repository (eg from UK PubMedCentral, arXiv, or sources that better serve the arts, humanities and social sciences)?
  • What workflows might be able to handle facts such as that the WoS record will become available some time after the paper is published, whereas deposit into the repository may happen earlier than that?
  • What methods might be helpful in addressing the inevitable questions of duplicate records, or ambiguous relations with existing records?
  • Are there implications for a repository’s mission and reputation if the balance of content it holds is rapidly changed by a large number of WoS-derived records?

Use cases may also be informed by the JournalTOCsAPI project (see item 5 below), which also explored similar issues in a recent post.

One practical consideration from a technical perspective, and one that will have a bearing on developing use cases, is the best method of extracting comprehensive records for institution “X” – the most appropriate field to query seems to be the address field, but it is not clear how consistent the institutional address in this field will be.  For example, early experimentation found that “leeds metropolitan university” returns only 201 records; using a wildcard in the form “leeds met*”, however, returns 1503 records (test conducted 29th September 2009).  This was an issue flagged to follow up with the Thomson Reuters reps on Wednesday 30th September (see item 4; post to follow).

In terms of the practicalities of actually getting records from WoS into intraLibrary once they have been harvested, Peter did indicate that it should be possible to upload suitable XML records into intraLibrary, though these will need to be in LOM format, meaning that we may need to perform an XSLT transformation to convert data retrieved from WoS into a suitable format.  Also, Peter is uncertain whether XML imported in this way can also include the LOM extensions we are using to accommodate bibliographic information, and he will need to speak to his technical colleagues at Intrallect to clarify.

Note:  There was also discussion around appropriate integration with SFX, our OpenURL resolver, as a possible means of identifying a published URL for WoS records – this has scope implications both for Bibliosight and for the remit of the Leeds Metropolitan University repository itself, extending it beyond an Open Access repository of research (i.e. to also comprise citation-only records).  It may need to be explored in more detail later in the project.

Action:  PD to clarify re upload of XML to intraLibrary including LOM extensions

Action:  NS/BB/SR to meet with another member of the URO to clarify potential use cases (meeting on Thursday 1st October)

Action:  All team members to contribute to ongoing discussion on the blog.

• Project reporting – blog; tags specified by JISC

It was agreed that the specific subject for blog posts this month will be ‘Technical standards’ – Peter agreed to contribute a post before the next meeting.

Action: PD to contribute a blog post on ‘technical standards’.

Action: All team members to contribute to ongoing discussion on the blog.

4.  Visit by Thomson Reuters reps on Wednesday 30th September

Mike and I met with Jon and Gareth from TR on Wednesday 30th (yesterday) who were able to clarify several issues for us – separate post to follow

5. Review of JournalTOCsAPI – http://www.journaltocs.hw.ac.uk/index.php?action=api

During the meeting, I gave a quick overview of the recently released JournalTOCsAPI at http://www.journaltocs.hw.ac.uk/index.php?action=api, with a view to de-mystifying the concept of an API for the less technical amongst us and also potentially giving the more technical a developmental steer.  Currently, queries need to be submitted to the API by URL and are returned as an RSS feed which includes as much metadata as the original TOC feed – depending on the quality of the original record.  Comparable to Bibliosight in many respects, this project perhaps has greater flexibility regarding the metadata it is able to query and return – it is, after all, building an API from the ground up that will query an openly accessible data source – however, it is likely that the quality of the data may not be as consistent as WoS; there may be fields missing, for example.

It has also been informative to engage with another, similar project as a ‘user’, and we discussed how Bibliosight might also engage with the JournalTOCsAPI community of users and agreed that it is a valuable opportunity to solicit the opinions of repository managers from other institutions using different software platforms.

Action:  NS to continue engaging with JournalTOCsAPI as a ‘user’

Action:  NS to send an email that can be forwarded to JournalTOCsAPI community of users as suggested in recent correspondence from Lisa Rogers

6.  Article Match Retrieval & Researcher ID

These were only touched upon briefly in the meeting and flagged to follow up with Thomson Reuters reps on Wednesday 30th September (see item 4; post to follow).

7.  A.O.B.

None

8.  Date of next meeting

20th October 2009 – 11:30 am

Posted in Bibliosight, Progress post, SCRUM minutes | 2 Comments »

More on ResearcherID

Posted by Nick on September 29, 2009

A quick search of my blog feeds turned up surprisingly little on ResearcherID – just 8 posts from my fairly populous, repository-oriented RSS aggregation, all from 2008.  They include this post from June 2008 on the WRAP Repository blog, which emphasises that “it would be easier for the author if there were a universal unique identifier that could help us all to share information about the author in a more automated way”, and a post on the principles of citation-based evaluation from Overdue Ideas in which @ostephens summarises a session by James Pringle from Thomson Reuters and voices concern about how the work being done by Thomson Reuters “joins up with activity in the sector, and by other organisations. How does ‘ResearcherID.com’ link to OCLC Identities work? It would be great to see some joined up thinking across the library/information sector on this, as otherwise we will end up with multiple methods of identification.”

So it seems that ResearcherID received a flurry of attention when it was released back in 2008 but is still just one potential solution to an ongoing issue – as I noted in a recent post, Open Research Online is using a unique university ID in their EPrints repository (though I need to do more reading, other solutions mooted in the blogosphere seem to be OpenID and OCLC Identities).  I also searched http://www.researcherid.com/ for “Leeds Metropolitan University” and found just 4 of our researchers in the database…

Nevertheless, in terms of the Bibliosight project, and the wider context of the Leeds Met repository, ResearcherID could well be an appropriate solution, and is certainly worth exploring further with the project team and with the URO… Just a very quick note on practicalities: batch upload to ResearcherID would require us to prepare a detailed XML document which, to my mark-up-phobic eye, looks decidedly non-trivial – it would need to comprise records for all Leeds Met researchers, of course.  This is an example (view in IE or FF; Chrome will interpret the XML rather than show the mark-up).

Posted in ResearcherID | 3 Comments »

Quick reminder(s)

Posted by Nick on September 24, 2009

Just a quick reminder to the project team that we should be regularly posting in the 6 subject areas specified by JISC:  Project SWOT analyses; User participation; Day to day work; Technical standards; Value add; Small wins and fails; Progress report.

Thanks to Mike for the recent Small win post (actually a rather big win!).

I’m currently putting together the agenda for next Tuesday’s meeting – I’ll post here and email a copy later today.

A reminder also that we will be joined by a new team member at the meeting – Sue is a research administrator in the Faculty of Health and has already been involved in repository development – testing for us and providing feedback on the developing infrastructure; she will have some good perspectives on institutional and faculty research administration and should be able to contribute to our use cases.

Finally, Jon Stroll, our rep from Thomson Reuters,  is visiting us on Wednesday 30th – he will have a technical colleague with him and should be able to give us a good steer on the project – this will be an item on the agenda.

Posted in Progress post | 1 Comment »