A Good Review II

Here’s another nice review, written for me by Andrew Moore. To the general public, Andrew may be known for recent features such as Evidently Cochrane’s “Paracetamol: widely used and largely ineffective” and (with Nicholas Moore) the European Journal of Hospital Pharmacy’s “Paracetamol and pain: the kiloton problem”. But these are just the tip of a research iceberg representing more than 40 years, 500 scientific and clinical publications, 200 systematic reviews, 100 Cochrane reviews, and a number of books on evidence-based medicine and pain. I was introduced to Andrew at the Oxford Pain Research Group in 2008, and have since helped him with many data analyses. With his colleague Sebastian Straube, I wrote about this work for Dr. Dobb’s in our post “Trials and tribulations: measuring drug efficacy in clinical trials, plotting graphs in Java with gnuplot, and reading Excel with JExcelAPI”. That was in Java, but I’ve done our more recent work in R, because of its conciseness and the huge number of library functions available for reading, reformatting, analysing and writing such data. I’ve also hosted his Bandolier evidence-based medicine website. The text below is Andrew’s.

The spreadsheet is a terrific boon for science and medicine. It allows huge amounts of information to be processed and analysed.

And that is fine when you are following a well known process, down a road well-travelled.

But the cutting edge of science and medicine is, by definition, off that road. Being at the front involves asking awkward questions — those for which there are no answers or processes.

Now large spreadsheets can be the barrier, because transforming them from something designed for one purpose into something useful for a different purpose is hard and fraught with potential error.

That’s where Jocelyn can help — helping researchers make better use of the tools they have to answer questions they didn’t think they could answer.

The three examples below come from clinical trials in acute and chronic pain, where analysis at the level of the individual patient allowed better insights into trial design and patient benefit.

The following papers used or were inspired by Jocelyn’s data analyses:

  • Validating speed of onset as a key component of good analgesic response in acute pain.
    Moore RA, Derry S, Straube S, Ireson-Paine J, Wiffen PJ.
    Eur J Pain. 2015 Feb;19(2):187-92. doi: 10.1002/ejp.536.

  • Faster, higher, stronger? Evidence for formulation and efficacy for ibuprofen in acute pain.
    Moore RA, Derry S, Straube S, Ireson-Paine J, Wiffen PJ.
    Pain. 2014 Jan;155(1):14-21. doi: 10.1016/j.pain.2013.08.013.

  • Interference with work in fibromyalgia: effect of treatment with pregabalin and relation to pain response.
    Straube S, Moore RA, Paine J, Derry S, Phillips CJ, Hallier E, McQuay HJ.
    BMC Musculoskelet Disord. 2011 Jun 3;12:125. doi: 10.1186/1471-2474-12-125.

  • Minimum efficacy criteria for comparisons between treatments using individual patient meta-analysis of acute pain trials: examples of etoricoxib, paracetamol, ibuprofen, and ibuprofen/paracetamol combinations after third molar extraction.
    Moore RA, Straube S, Paine J, Derry S, McQuay HJ.
    Pain. 2011 May;152(5):982-9. doi: 10.1016/j.pain.2010.11.030.

  • Pregabalin in fibromyalgia–responder analysis from individual patient data.
    Straube S, Derry S, Moore RA, Paine J, McQuay HJ.
    BMC Musculoskelet Disord. 2010 Jul 5;11:150. doi: 10.1186/1471-2474-11-150.

  • Fibromyalgia: Moderate and substantial pain intensity reduction predicts improvement in other outcomes and substantial quality of life gain.
    Moore RA, Straube S, Paine J, Phillips CJ, Derry S, McQuay HJ.
    Pain. 2010 May;149(2):360-4. doi: 10.1016/j.pain.2010.02.039.

How to List Blog Posts from outside WordPress

On my website, I’ve got two kinds of page. One kind is like my home page: coded directly as HTML. These pages are static, in that they are files which never change unless I edit them. The other kind of page belongs to this blog. These pages are implemented in WordPress, and are dynamic. When your browser asks for a WordPress page, it sends a web address to my server. The server looks for a PHP script at that address and runs it, and the script decides what HTML to send there and then, based on the contents of WordPress’s database. A good example is the page at http://www.j-paine.org/blog/ which lists my blog posts. But what should I do if I want to list these posts outside WordPress, for example on my home page? There’s an answer at “How to display recent posts outside WordPress” by Paul Green.

It’s the same kind of problem that I solved in “How to Run PHP under WordPress with Justyn’s Magic Includer”. There, I needed to stand outside WordPress and run a script that added information to its database about the names and times and venues of a teacher’s classes, so that they could be displayed by the Promenade theme. Here, I need to stand outside it and run a script that loops through the database returning the text of each and every blog post. In both cases, the scripts need to know where to find the WordPress functions they must call to do the job. In terms of the analogy I used in my Justyn’s Magic Includer post, I need to tell my scripts that to find the WordPress tools, they’ll have to rummage around behind that pile of motorbike spares at the back of my garage.

Here’s a demonstration. The script is below: a shortened version of the one in Paul Green’s post, and also similar to the “Standard Loop” examples in “Class Reference/WP_Query” from the authoritative WordPress Codex. You can see what its output looks like by going to http://www.j-paine.org/blog/demos/posts_demo.php.

<?php

/* posts_demo.php */

/*
A simple script that demonstrates
looping through blog posts and
displaying each one.
*/

// Load the WordPress environment so its
// functions become available to this script.
require( $_SERVER['DOCUMENT_ROOT'] . '/blog/wp-load.php' );

// Ask for every post: -1 means no limit.
$args = array( 'posts_per_page' => -1 );
$latest_posts = new WP_Query( $args );

// The standard WordPress loop: for each
// post, display its title, date, and excerpt.
while ( $latest_posts->have_posts() ) {
  $latest_posts->the_post();
  the_title();
  echo "<br>\n";
  the_time( 'l jS F, Y' );
  echo "<br>\n";
  the_excerpt();
  echo "<br><br>\n";
}

// Restore the global post data, which the_post() overwrote.
wp_reset_postdata();

?>
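For a variation (my own sketch, not from Paul Green’s post): to list only the five most recent posts, with each title linked to its post, the query arguments and the loop body can be changed as below. `get_permalink()`, `get_the_title()`, `esc_url()` and `esc_html()` are standard WordPress template functions.

```php
<?php

/* recent_posts_demo.php -- my own variation on the script above.
   Lists just the five most recent posts, each title
   linked to its post. */

require( $_SERVER['DOCUMENT_ROOT'] . '/blog/wp-load.php' );

// Ask for five posts instead of all of them.
$recent = new WP_Query( array( 'posts_per_page' => 5 ) );

while ( $recent->have_posts() ) {
  $recent->the_post();
  // Wrap the title in a link to the post itself,
  // escaping both the URL and the title text.
  echo '<a href="', esc_url( get_permalink() ), '">',
       esc_html( get_the_title() ), "</a><br>\n";
  the_time( 'l jS F, Y' );
  echo "<br><br>\n";
}

wp_reset_postdata();

?>
```

This needs a WordPress installation to run against, of course, just like the script above.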


A Good Review

Here’s a very nice review one of my customers sent. His site is still confidential, so I can’t show it here, but I can say that the WordPress theme he was talking about is a premium theme that works with WP Job Manager. The rest of the text below is his.

Like the majority of business owners, I found that the idea of creating a website was like stepping off a plane in a busy, unfamiliar city where no one speaks your language. You have to trust that someone will understand your basic attempts at communication, or you’ll end up spending money on something that you didn’t ask for. Having had a bad experience with a web developer in the past, I decided to purchase a WordPress template so that I could choose the basic functions, look and feel of the site from the outset.

I wrote a brief for Jocelyn in layman’s terms, and I was very impressed to see an email in my inbox from Jocelyn with a decoded brief and questions about how to tackle some of the shortcomings of the template.

The template needed quite extensive editing in places, and it was clear that Jocelyn had understood exactly what we needed from the site. The users would need to spend time entering information into a database, so it was crucial that this process be easy and simple to navigate. Jocelyn’s interpretation of our brief was excellent. He integrated a number of plugins, which avoided the need to code custom scripts and so saved me money. He also created a login feature which prevents the website from loading until a registered username and password have been entered.

Jocelyn integrated our logos, colour scheme, graphics and text in a very attractive and well-considered manner, linking nicely with the overall look and feel of the site. He also helped us move our domains to our host server, uploaded the site, and thoroughly tested it before he handed it over. I have been very impressed with the time Jocelyn takes to explain clearly what he is doing, and to ask when he encounters a problem and needs input from me. We have asked Jocelyn to manage our site for us, which is testament to our trust and confidence in his ability.

Thanks again Jocelyn!

How to Remove Mumbo Jumbo from Google’s Search Result Links

Here’s a useful website for overcoming a defect in Google: IndustryStandardSoftware.com’s Google Result Link Converter. It solves the problem that copying links from a page of search results is harder than it needs to be. For example, here are the first few results for “Oxford world class city”.
[Screenshot: Google search results for ‘Oxford world class city’]

These are long links, and Google has shortened the link text below the titles by replacing parts of it with ellipses. So copying and pasting this text won’t work.

Most people would probably mouse over the title, right-click, and select “Copy Link Location” (or the equivalent in whichever browser they’re using) from the resulting menu:
[Screenshot: Google search results for ‘Oxford world class city’, plus the link-handling menu]
But when you do that, you get a bloated monster full of Google mumbo jumbo:
https://www.google.co.uk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=0ahUKEwjDyfDO6ozSAhUsCsAKHdRXCMkQFggiMAE&url=http%3A%2F%2Fmycouncil.oxford.gov.uk%2Fdocuments%2Fs28404%2FAppendix%25201%2520Draft%2520Corporate%2520Plan%25202016%2520-%25202020%2520updated.pdf&usg=AFQjCNFqf7fbdKtAGIxuwmhpXWx1I7D-Jg

Another method, which works most of the time on my system, is to open the link in a browser and copy from the browser’s address bar. The Google link redirects the browser to the original page, and its URL then appears in the address bar. But that doesn’t always work if the file is a PDF. When I tried it on the results above, it did with the second search result, but not with the first. Instead of displaying the page, the browser invited me to either download or open it with a program. When I did the latter, it opened in a PDF viewer which displayed the URL as a title, but didn’t give me a way to copy it.

Yesterday, though, I searched for a solution to this problem and found IndustryStandardSoftware.com’s link converter:

[Screenshot: IndustryStandardSoftware.com Google Result Link Converter]
And it works. I was able to paste those monstrous PDF links into it and get back their original URLs.
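Out of curiosity, here’s a sketch of what such a converter presumably does — my own guess, not IndustryStandardSoftware.com’s actual code. The original address travels inside the Google link’s `url` query parameter, URL-encoded, so recovering it is just a matter of parsing the query string and decoding that parameter. The example link below is a shortened, made-up one.

```php
<?php
/* A guess at what a Google-result link converter does:
   pull the "url" parameter out of the link's query string.
   Not IndustryStandardSoftware.com's actual code. */

function extract_original_url( $google_link ) {
    // Get just the query string, e.g. "sa=t&url=http%3A%2F%2F...".
    $query = parse_url( $google_link, PHP_URL_QUERY );
    if ( $query === null || $query === false ) {
        return null;
    }
    // parse_str splits the query string into an array,
    // URL-decoding each value as it goes.
    parse_str( $query, $params );
    return isset( $params['url'] ) ? $params['url'] : null;
}

// A made-up, shortened example of a Google result link:
$link = 'https://www.google.co.uk/url?sa=t&url=http%3A%2F%2Fexample.com%2Fdoc.pdf&usg=abc';
echo extract_original_url( $link ), "\n";  // http://example.com/doc.pdf
```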

Johnnie Walker’s blog Eat My Business has a post about this: Online Tools to Convert those Long Google Search Engine Results Links Back to Normal Links. As well as the Link Converter, he references a Stack Exchange discussion about the problem, and a script for the Greasemonkey add-on for Firefox. This is probably out of date though, judging from the comments about it. He also says that these redirect links appear when you’re logged in to a Google account; but for me, they appear all the time.