Using the API Without Programming
For Twitter methods that require no authentication, you can invoke a method and read the returned information with any
tool that can send a URL to an HTTP server and read the returned text—even with a web browser. For example, let's say you want to use your web browser to make an API call that asks the Twitter server about messages that mention the RDF query language SPARQL, and you want the results in Atom format. You could enter this URL into your browser without being logged into Twitter:
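The URL itself isn't reproduced in this excerpt, but assembling such a request is straightforward; the following Python sketch builds one, assuming the historical search.twitter.com Atom endpoint (the endpoint name is an assumption here, for illustration):

```python
from urllib.parse import urlencode

# Assumed historical search endpoint that returned Atom results;
# the exact URL used in the article is not shown in this excerpt.
base = "http://search.twitter.com/search.atom"

# urlencode escapes any characters in the query that need it.
url = base + "?" + urlencode({"q": "SPARQL"})
print(url)
```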
A web browser isn't the only application that can do something with a URL. Command-line tools that typically read the
name of a local file to process can often read a URL pointing to a remote file as well, as long as the URL contains no
characters that might confuse your command line. (Quoting the URL helps.) For example, you can pass a URL instead of a
filename to most XSLT processors. The following code (shown here as two lines, but enter it as one) tells the XSLT
processor Saxon to run the stylesheet twitteratom2txt.xsl
on the URL passed as the first argument, which is a call to a search method of the Twitter API:
The brief twitteratom2txt.xsl
stylesheet, shown below, converts the Atom representation of
Twitter messages to simple text by outputting the message author name, a colon and a space, and the Twitter
message (available in both the title and content
elements—I used the title because the content
version has extra markup to identify the search term), and two carriage returns.
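The stylesheet itself isn't shown in this excerpt, but the same transformation can be sketched in a few lines of Python; this version parses a small inline Atom sample (standing in for the live feed) and emits the author name, a colon and a space, and the title:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A tiny inline Atom sample standing in for the live search results.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>Playing with sparql/myexperiment :-)</title>
    <author><name>yokofakun (Pierre Lindenbaum)</name></author>
  </entry>
</feed>"""

feed = ET.fromstring(sample)
lines = []
for entry in feed.iter(ATOM + "entry"):
    name = entry.find(ATOM + "author/" + ATOM + "name").text
    title = entry.find(ATOM + "title").text
    lines.append(name + ": " + title)

# A blank line between entries, like the stylesheet's two line breaks.
print("\n\n".join(lines))
```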
Here is a bit of the output when using this to search for the term SPARQL with the command line shown above:
padacek (Padacek): http://keg.vse.cz/ - management pro
foaf:Group/Organization,foaf:Document, vyhledavani resourcu bez popisu
a trenink SPARQL dotazu.
yokofakun (Pierre Lindenbaum): Playing with sparql/myexperiment :-)
SemanticBot (SemanticBot): #SemanticDelicious : RDF and SPARQL: Using
Semantic Web Technology to Integrate t.. http://tinyurl.com/69srfu
pbacgrad (Luis Casillas): @benbinary Nice. Have you done any work with
SPARQL? I've read a couple posts but am looking for some actual
JeniT (Jeni Tennison): @smyles That seems to be the one, although I'm
not sure it's helpful in my case. More oriented to SPARQL back ends,
and def. not RDFa!
The stylesheet could create fancier output, but for such a short stylesheet, the result is quite useful. XSLT processors are just one example of text-processing tools that can accept URL references.
The wget and
cURL utilities are standard on most Linux distributions, and free Windows versions are easy to find. These tools let you invoke many Twitter API methods from your command line, and because they let you authenticate your call by specifying a Twitter ID and password, they give you a wider range of API calls to choose from than browser or XSLT command-line invocations allow. The following shows how to use each utility to retrieve XML versions of the 20 most recent messages from twitguy's friends (the wget command is split onto two lines for display here, but should be entered as one line):
curl -u twitguy:tgpw http://twitter.com/statuses/friends_timeline.xml
wget --http-user=twitguy --http-passwd=tgpw
  http://twitter.com/statuses/friends_timeline.xml
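Both commands use HTTP Basic authentication, which simply base64-encodes the ID and password into a request header; here is a quick Python sketch of what curl's -u option produces, using the article's example credentials:

```python
import base64

# HTTP Basic authentication: base64-encode "user:password" and send the
# result in an Authorization header. twitguy:tgpw are the example
# credentials used throughout this article.
credentials = "twitguy:tgpw"
auth_header = "Basic " + base64.b64encode(credentials.encode()).decode()
print(auth_header)
```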
cURL improves on wget by letting you send an HTTP POST message, which is necessary for Twitter API calls such as the update
method that posts a tweet to your account. The following command (split for display here, but to be entered as one line when you try it with your own ID) tweets a classic message to twitguy's account:
curl -u twitguy:tgpw -d status="having another cup of coffee"
  http://twitter.com/statuses/update.xml
Programming Languages and the Twitter API
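The POST that curl sends above can also be assembled with a general-purpose language's standard library; the following Python sketch builds (but does not send) an equivalent request, with the statuses/update URL assumed from that era's API:

```python
from urllib.request import Request
from urllib.parse import urlencode

# Assumed historical update endpoint; supplying a data payload is what
# makes urllib issue this request as a POST rather than a GET.
url = "http://twitter.com/statuses/update.xml"
data = urlencode({"status": "having another cup of coffee"}).encode()
req = Request(url, data=data)

print(req.get_method(), req.full_url)
```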
For most popular programming languages, you can find a Twitter API library that lets you send and receive tweets and
other Twitter-related information using the syntax and data structures of those languages. At this
writing, one Google group devoted to Twitter development lists one API library for each of the Cocoa and Flash/ActionScript languages,
two for .NET and Python, three each for Ruby, Perl, and Java, and five for PHP. Twitter's
Twitter Libraries page lists most of these plus
additional libraries for C++ and PL/SQL.
If you can take advantage of all of Twitter's power from the free cURL utility, why use the extra syntax of a programming language? I looked into it for the same reason that I usually learn any new programming language: because after others have done the hard work of writing libraries to perform specific sets of tasks, I can write a bit of code to patch together calls to a set of these libraries and combine their power into something greater than the sum of their parts.
For example, even though I could use cURL to retrieve information about stock quotes, I couldn't parse out individual pieces of that information, act on the retrieved values, and assemble them into complete English sentences to publish on Twitter. With the python-twitter library, this was remarkably easy. Next month, in part 2 of this article, I'll show you just how easy.