
A Primer On WSGI


This article shows you how to make money and have fun with some code, Python and the Internet. It all starts with a four-letter acronym, WSGI – a spec that makes it very easy to write applications for the Internet.

The WSGI spec is probably the simplest you will ever come across. It defines exactly one call interface that your application must implement in order to be a WSGI application. What’s more, it doesn’t have to be a function; it can be any callable.
The basic skeleton of a WSGI application is as follows:
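A minimal sketch of that skeleton (it mirrors the class-based example further down):

```python
def application(environ, start_response):
    # Tell the server which status and headers to send
    start_response("200 OK", [("Content-Type", "text/plain")])
    # The body must be an iterable of strings
    return ["Hello World"]
```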

That’s your app. It doesn’t do much. It needs all of three lines of code to implement a WSGI application. Of course, it can be quite a headache writing larger (read ‘real-life’) applications as a single function; so you can also write a WSGI application like the one shown below:

class WSGIApplication(object):
    def __call__(self, environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return ["Hello World"]

By implementing a  __call__() method in a class, you can effectively turn an object into a function (actually, everything in Python is an object—even functions). The class needs to be instantiated into an object, but that object can now behave like a function too.

The arguments
The callable takes two arguments—environ and start_response(). The first one, environ, is a dictionary that holds the environment, which is basically a CGI environment with a few WSGI-specific keys added. The canonical way to parse the request data out of a WSGI environ is to import the cgi module and use its FieldStorage class, as follows:

import cgi

# We will make a copy of the environ and strip out its query
# string, because apparently it's bad for FieldStorage

post_env = environ.copy()
post_env["QUERY_STRING"] = ""

# And now we'll parse it

post = cgi.FieldStorage(
    fp=environ["wsgi.input"],
    environ=post_env,
    keep_blank_values=True
)

The more interesting argument is the second one, start_response(), which is a callable. It is used to set the status code and send out the headers. But why can’t that be done as part of returning the content? Well, it’s an ugly hack, but a very clever one that is immensely useful, and no one has suggested a better way yet.

One of the aims of WSGI is to make the entire thing very loosely coupled, infinitely pluggable and layerable. With this kind of flexibility, one might think the spec would require a huge API. But the Python developers have managed to pack in this extreme flexibility with just the one call interface.

The start_response() call tells your server what response your app wants to send to the browser (or client). But the neat trick is, the status (and headers) are sent only after the app has returned, or has yielded at least one element to send to the client. This gives ample room to set up a 202 Accepted response, go on processing a particularly expensive operation, and change it to a 200 OK or a 400-series error code depending on the outcome, just before returning the data. But where this hack really shines is in WSGI middleware.

Middleware are some of the most interesting types of WSGI apps: WSGI apps that call other WSGI apps. Middleware can add elements to the request, or remove elements that are meant for the middleware to consume. As the response flows back from your app through the middleware and on to the client, the middleware can analyse and change the response. Middleware solves pretty generic problems, like HTTP authentication, URL-based routing, subdomain-based routing, XSRF protection, and the like.
Middleware can be stacked infinitely. Because they are WSGI apps and look like WSGI-compliant servers to the upstream apps, as many WSGI middleware as are needed can be chained together.
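As a sketch of the idea (the class name and the /api prefix are invented for illustration), a middleware is just a callable that wraps another WSGI app:

```python
class PrefixStripperMiddleware(object):
    """Hypothetical middleware that consumes a URL prefix from the
    request before handing it on to the wrapped app."""

    def __init__(self, app, prefix="/api"):
        self.app = app
        self.prefix = prefix

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path.startswith(self.prefix):
            # Strip the prefix; the inner app never sees it
            environ["PATH_INFO"] = path[len(self.prefix):] or "/"
        return self.app(environ, start_response)
```

Wrapping is just `app = PrefixStripperMiddleware(app)`; to the server, the result is indistinguishable from a plain WSGI app.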

This is where start_response() really shines. The first middleware calls start_response() with 202 Accepted and calls the next one in the chain to handle the request. Suddenly, one middleware pretty high up in the chain decides to throw a 500 Internal Server Error. As this response travels back down the middleware chain, another middleware picks this up, turns it into a 503 Service Unavailable, and returns a pretty looking 503 page with the webmaster’s e-mail.
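A sketch of that pattern (the class name and the 503 page are made up, and it assumes the inner app calls start_response() before returning): the middleware captures the inner app's status instead of forwarding it straight through, and substitutes its own response if it sees a 500:

```python
class ErrorPageMiddleware(object):
    """Hypothetical middleware that turns a 500 from the wrapped
    app into a friendlier 503 page."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        captured = {}

        def capturing_start_response(status, headers):
            # Don't forward anything yet; just remember what the
            # inner app wanted to send
            captured["status"] = status
            captured["headers"] = headers

        body = self.app(environ, capturing_start_response)
        if captured["status"].startswith("500"):
            start_response("503 Service Unavailable",
                           [("Content-Type", "text/html")])
            return ["<h1>Temporarily down for maintenance</h1>"]
        start_response(captured["status"], captured["headers"])
        return body
```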

The return
WSGI’s return is even more interesting. An app (the callable) needs to return an iterable. The idea is that the app can return one chunk of data as and when it becomes available. The specification actually states that the headers are first sent out when the first non-empty string is yielded, so an error state can be achieved pretty late into the handling cycle. The only catch is that no non-empty strings should have been returned before changing a 200 OK into a 500 Internal Server Error.
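For instance, a sketch of a streaming app written as a generator (the chunks themselves are made up):

```python
def streaming_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    # The server sends the status and headers only once the first
    # non-empty chunk below is yielded, so there is still room to
    # bail out with an error status before that point.
    yield "first chunk\n"
    yield "second chunk\n"
```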

There’s an excellent article on the Internet about the benefits of WSGI; the link is provided in the resource box at [1]. It should definitely be read before attempting to write your first WSGI application. It gives you a feel for what the developers were trying to achieve, and what you should and shouldn’t do with the spec.

Structuring big WSGI applications
The WSGI spec is quite flexible, as there is only one entry point into your application. You can, therefore, write the app as a single function, a class implementing the __call__() method, or even as a complete module. When you get around to writing bigger WSGI applications (and this applies not only to pure WSGI applications, but also to micro frameworks that expose the application as a WSGI callable, including Flask, Pylons, WebOb, and even Tornado in WSGI mode), you will generally want to write them as a module.

There are a couple of reasons for this. First, it’s always a good idea to write self-contained code. A well written WSGI application can be distributed using distribute, along with a requirements.txt file specifying dependencies for pip to download. In fact, because it’s so easy to get WSGI apps on the Internet for free (using Google App Engine or Heroku), and because these services generally mandate writing system independent modular code, it is a good idea to write code that can be run out-of-the-box on these services. Second, wouldn’t it be awesome to do the following:

from MySuperAwesomeWSGIApp import app

# Here it's important to note exactly what app is:
# app is your WSGI callable. Its definition and
# initialisation reside in __init__.py. Yes,
# initialisation. The app should be fully init-ed
# and ready to be served.

import OneMiddleware
import AnotherMiddleware
import ThirdMiddleware

app = OneMiddleware(app)
app = AnotherMiddleware(app)
app = ThirdMiddleware(app)

# Now hook up the app to the server - more on that later

Because WSGI apps can be called by another WSGI app (as is evident in how it is used in middleware), you can hook up your WSGI app (which you have thoughtfully written as a module) to other WSGI apps. But, to which ones? Well, there are URL-based routers, sub domain-based routers and authentication filters that are widely available online. You will find links to a few in the resource section.

Serving WSGI applications
Most people try serving WSGI applications using Apache and mod_wsgi. They fail miserably and complain that Python is slow or that WSGI is dead technology. What they don’t realise is that they are serving it wrong.
While it’s perfectly acceptable to make a WSGI app face the Web directly using mod_wsgi, it isn’t advisable for performance and security reasons. Security-wise, it’s a bad idea to run the Python interpreter inside your Web server’s process. You will generally want to serve your apps with a pure-Python server and put an nginx reverse-proxy in front. With that in mind, let’s examine a few WSGI servers and look at how to use them.

First is Python’s own wsgiref. This is the reference WSGI server implementation, which comes as part of the Python standard library. It is simple, reasonably fast, and great for testing, development, or even running as a production server on your intranet (although for a particularly busy intranet that might be pushing it a bit). Use it as follows:

from wsgiref.simple_server import make_server
from MyWSGIApp import app

httpd = make_server('', 5000, app)
httpd.serve_forever()

When you are ready to move up to the production level (i.e., on the real Internet), you will want to use a more robust server. There are a few options to choose from. You can go thread-pooled, event-based, or use the current rage in Python—application-level micro-threads that are co-operatively scheduled. Python calls them greenlets, and they are great for implementing asynchronous TCP servers, such as the one that comes with gevent. Gevent can be installed from the repos or from pip, and the server set-up is as follows:

from gevent.wsgi import WSGIServer
from MyWSGIApp import app

httpd = WSGIServer(('', 5000), app)
httpd.serve_forever()

Note that the host-port pair is passed as a tuple. Gevent’s greenlet-based server is currently regarded as the fastest Python implementation of the WSGI server, and because the server itself is written in C (it’s actually written using libev), it really is quite fast.
A different way of serving up a WSGI application is to use Green Unicorn, which is actually ported over from Ruby’s Unicorn project. It’s a pre-forking server, but it can make use of gevent and greenlets. To use Green Unicorn, install the gunicorn package from pip or your package manager (ensure you install gevent too, because it is an optional dependency). Serving the application is a little different, however; Green Unicorn is a command-line application, so you need to use the following command:

$: gunicorn -k gevent -w 4 -b 127.0.0.1:5000 MyWSGIApp:app

The -k gevent bit instructs gunicorn to use gevent and its greenlets, and the -w 4 tells it to use four workers (you should generally use no more than n+1 workers, where n is the number of cores your processor has, which is somewhat like the number of jobs you tell make to spawn with the -j option). The -b option supplies the host-port pair (this can be omitted) and, finally, you have the application itself; the colon notation basically says from MyWSGIApp import app and use that.
You should put a reverse-proxy in front of the application; something like the nginx config below will do just fine:

server {
    listen 80;
    server_name _;

    location / {
        # Assumes the app server from above, listening on port 5000
        proxy_pass         http://127.0.0.1:5000;
        proxy_redirect     off;

        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
    }
}

Ideally, all your static files should be served by nginx.
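A location block along these lines would do it; the filesystem path is a placeholder for wherever your static files actually live:

```nginx
# Goes inside the server { } block above
location /static/ {
    # Placeholder path; point alias at your real static directory
    alias /var/www/myapp/static/;
    expires 30d;
}
```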

WSGI applications on Heroku
Heroku is apparently the next big thing in cloud technology. Its new Cedar stack makes it very easy to write WSGI applications and run them on the Web. With Heroku, you start with a virtualenv and develop your app like you generally would. When you’re ready to push the app into Heroku, in the root directory of the application, issue the following command:

(venv)$: pip freeze > requirements.txt

Heroku recognises a Python app by the requirements.txt file, which specifies all the dependencies of your application. Optionally, you can write a runtime.txt in which you specify which Python interpreter to use. Your runtime.txt file should include the following line:

python-x.y.z

…where x.y.z is the Python version. Supported versions are 2.7.3 and 3.3.0 (2.7.3 is the default runtime), but you can specify any publicly available version from 2.4.4. You can even use pypy-1.9 if you want to. I couldn’t find a way to use Stackless Python or PyPy 2.0 beta without writing my own buildpack.
Remember that because Heroku’s Procfile expects a single command to start the server, and because there are no reverse-proxies between the app and the Web (save for Heroku’s routing mesh), your best bet is to use gunicorn.
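Under that constraint, the Procfile could be as simple as the one-liner below; the worker count and the MyWSGIApp:app import path mirror the earlier gunicorn example and are assumptions:

```
web: gunicorn -k gevent -w 4 MyWSGIApp:app
```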

WSGI applications on Google App Engine
Google App Engine defaults to using CGI for Python applications. To use WSGI, you must use the Python 2.7 runtime. Specify the following in your app.yaml file:

runtime: python27
api_version: 1
threadsafe: true

It’s important to specify the thread safety of your app. If you are not sure, say ‘false’; it doesn’t hurt.
Specify the handlers as shown below:

handlers:
- url: /.*
  script: MyWSGIApp.app

End notes
Because WSGI is so nifty, I generally stick to Web frameworks that expose their apps as WSGI callables. Flask is a great framework if you want to write WSGI applications. Pylons is a lot more involved. Django does have its own server but it can also act as a first-class WSGI app, so you are covered.
I will let you in on a secret. If you choose to use Green Unicorn and your app can handle static files, you don’t even need to use a reverse-proxy; Green Unicorn is good enough to face the Web.

Maybe in another article, I will explore how to actually write a WSGI application with WebOb and, maybe, Flask. I won’t touch on writing raw WSGI applications in this article. It all hinges on demand, so if you want them, do pester OSFY with mails.

[1]     WSGI and the Pluggable Pipe Dream – http://lucumr.pocoo.org/2011/7/27/the-pluggable-pipedream/
[2]     Selector – A WSGI URL-based router – https://github.com/lukearno/selector/
[3]     A bunch of useful WSGI libraries and middleware – http://www.wsgi.org/en/latest/libraries.html


