An Interview with Evan Prodromou, the Developer Behind the Open Source Twitter Clone

The author of Laconica, an open source tool that lets anyone set up their own Twitter clone, discusses the technical challenges of microblogging and why it's not a fad.


Programmer and self-described internet entrepreneur Evan Prodromou has been involved in starting several open source and open content projects. His best-known project is Wikitravel, a wiki site for collaboratively edited travel guides.

Identi.ca, his current project, is his attempt to develop a free network service built on shared, open data. To the uninitiated, the site and service look, and largely function, like a clone of Twitter. The big difference is that Laconica, the software running Identi.ca, which Prodromou has also been developing, is free and open source. Anyone can copy the code and run it on their own servers.

Originally from San Francisco, Prodromou is now based in Montreal, where he has established a company, Control Yourself, to turn Identi.ca into a business.



Howard Wen: What term would you use to describe this communication architecture—the Twitter or Facebook-style "user status update" system?

Evan Prodromou: If it [were] up to me, I would probably choose something like "short message hub" or "universal messaging hub." But the name that's really sticking with this kind of service is "microblogging." I think that comes out of the fact that a lot of [these] services have a web interface and it looks like a blog. I don't think it's a fully accurate term, but it's good enough, and it gets the idea across.

HW: Can you sum up how the architecture for microblogging basically works in a service like Twitter or Identi.ca?

EP: Typically, you have a web service that serves as a hub for messages, and messages can go into this hub through many different media. They can be posted from a web page, but they could also be sent from an IM client, SMS or e-mail. They can be sent in through a web API, too. Twitter, for example, [has] a number of desktop clients that support their API.

Then each user in these networks can subscribe to messages from friends, from people they're interested in. So the messaging hub does this switching process of saying, "Of all these messages coming in, where do I send them out to?" The messaging hub sends them out to subscribers. These messages go out over RSS, web pages, IM, SMS or to API clients.

So, it's a "multiple channels in; multiple channels out" messaging system with a social networking aspect.
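The routing step Prodromou describes, many inbound channels feeding a hub that fans notices out to each subscriber's preferred outbound channels, can be sketched roughly as follows. This is an illustrative toy, not Laconica's actual code; all class and method names here are invented:

```python
# Toy sketch of a microblogging "message hub": notices arrive from any
# channel (web, IM, SMS, API), the hub looks up who subscribes to the
# author, and fans the notice out over each subscriber's channels.
class MessageHub:
    def __init__(self):
        self.subscribers = {}  # author -> set of subscriber names
        self.channels = {}     # subscriber -> list of delivery channels

    def subscribe(self, subscriber, author, channel):
        self.subscribers.setdefault(author, set()).add(subscriber)
        self.channels.setdefault(subscriber, []).append(channel)

    def post(self, author, text):
        """Switching step: decide where each incoming notice goes out."""
        deliveries = []
        for subscriber in self.subscribers.get(author, ()):
            for channel in self.channels.get(subscriber, []):
                deliveries.append((subscriber, channel, text))
        return deliveries

hub = MessageHub()
hub.subscribe("alice", "evan", "sms")
hub.subscribe("bob", "evan", "rss")
print(hub.post("evan", "Hello from the hub"))
```

The same `post` call produces one delivery per subscriber per channel, which is the "multiple channels in; multiple channels out" shape described above.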

HW: To clarify, what's the difference between the architecture of a microblogging system and that of an instant messaging service?

EP: IM tends to not have a lot of persistence. You don't expect to be able to find a particular message or something that someone said in IM kept around on the web forever, and that's something that does happen with microblogging. We expect things to be persistent.

There's also an expectation of real-time conversation with IM. Whereas in microblogging, the conversation can happen over a period of days. So it's a more extended conversation.

Finally, most IM conversations are one-to-one. With microblogging, even a relatively antisocial person will usually end up with 50, 60, 100 people listening to them. It's common to get up into the triple digits fairly quickly, even for someone who's not looking to add a lot of friends.

One funny thing is that in IM, if I'm in a conversation with you and another person, all three of us can "hear" each other. But in microblogging, I might send out a notice, and you and the other person can both hear what I'm saying, but you can't hear each other, because you're not subscribed to each other's messages. So there's a disconnect. The way notices propagate is a little bit more fractured across time and across a social network.
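The disconnect he describes falls out of the follow graph being directed: a notice reaches a listener only through that listener's own subscription to the speaker, regardless of who else is "in" the conversation. A toy illustration (the names and the `hears` helper are invented for this sketch):

```python
# Directed follow graph: follows[x] is the set of people x subscribes to.
follows = {
    "you":   {"me"},   # you follow me...
    "other": {"me"},   # ...and so does the other person,
    "me":    set(),    # but you two don't follow each other.
}

def hears(listener, speaker, follows):
    """A notice from `speaker` reaches `listener` only via a subscription."""
    return speaker in follows.get(listener, set())

assert hears("you", "me", follows)         # both of you hear my notice,
assert hears("other", "me", follows)
assert not hears("you", "other", follows)  # but replies between you two
assert not hears("other", "you", follows)  # don't propagate to each other.
```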

HW: Why did you create an open source version of the Twitter system?

EP: As someone who's very active in open source software and open source web software, I'm very interested in how much of our online life we are putting into the hands of services that are very proprietary. Some people call it the "roach motel model"—you put your data in and it won't come out. I put tons of information about myself and my life into services like Facebook. Twitter was another one that I was participating in quite a bit. But if anything happened with Twitter, if Twitter had reliability problems, or if Twitter wasn't on the web for a while, I couldn't take my data, put it into my instance of that software, and run it somewhere else. I find that frustrating. I think a lot of other people find that frustrating, too: giving up control of your social life and your sociality online to someone who doesn't necessarily have your best interests at heart.

It's not that the people who started Facebook or MySpace or Twitter are bad or anything. But losing one user's data, or dropping one user's set of messages, is not a big deal to them, whereas it's a really big deal to me. So I wanted to experiment with providing web services where the user is in control.

I'm interested in doing this in different fields in the web services area. Microblogging just happens to be very popular. I wanted to get the idea in front of people who are interested in Twitter, and say maybe there's another way that we can do this.

HW: What language and libraries did you use to create Laconica, and why?

EP: Laconica is written with PHP and MySQL. We have off-line processing daemons that do a lot of the same kind of work that the Twitter off-line daemons do. They do routing and they do sending stuff out over different channels. We don't have a dedicated queuing server that's built into the system right now, but that will be in an upcoming version.

Right now, we just use our MySQL database as kind of an ad-hoc queuing server. It's not a very good way to do things and it kind of hurts our performance.
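Using a database table as an ad-hoc queue typically looks something like the sketch below: producers insert rows, and a worker daemon repeatedly polls for an unclaimed row, marks it claimed, and processes it. This is a generic pattern, not Laconica's actual schema, and `sqlite3` stands in for MySQL here. The repeated polling query against the main database is exactly the performance cost mentioned above:

```python
import sqlite3

# A table doubling as a job queue: INSERT to enqueue, poll-and-claim
# to dequeue. Every dequeue is a query against the main database.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE queue (
    id      INTEGER PRIMARY KEY,
    payload TEXT,
    claimed INTEGER DEFAULT 0
)""")

def enqueue(payload):
    db.execute("INSERT INTO queue (payload) VALUES (?)", (payload,))
    db.commit()

def dequeue():
    row = db.execute(
        "SELECT id, payload FROM queue WHERE claimed = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None  # queue is empty: the worker would sleep and re-poll
    db.execute("UPDATE queue SET claimed = 1 WHERE id = ?", (row[0],))
    db.commit()
    return row[1]

enqueue("notice: hello")
enqueue("notice: world")
print(dequeue())  # "notice: hello"
print(dequeue())  # "notice: world"
```

A dedicated queuing server avoids this polling loop entirely by pushing jobs to idle workers, which is why moving off the database-as-queue approach helps performance.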

I think it's a little more unusual to do long-running processing, like queue handlers, in PHP than it is with other scripting languages like Ruby, Python, or Perl. However, once we made PHP work that way, it turned out to be a pretty decent choice for the implementation. And the fact that the web interface and the off-line daemons use the same language and libraries makes contributing to both a lot easier.


