{"id":23,"date":"2024-03-12T18:08:50","date_gmt":"2024-03-12T18:08:50","guid":{"rendered":"http:\/\/www.dieselweb.org\/?page_id=23"},"modified":"2024-06-04T12:20:55","modified_gmt":"2024-06-04T12:20:55","slug":"dieselweb-org","status":"publish","type":"page","link":"https:\/\/dieselweb.org\/","title":{"rendered":"DieselWeb.org <\/>"},"content":{"rendered":"
\n
\"\"<\/figure><\/div>\n\n\n

The first iteration of the dieselweb.org website was created in 2009 as a resource for programmers. The domain has since passed through several owners and iterations. We are providing archived 2009-2010 content from the original site below.

## Meet diesel

diesel is a framework for writing network applications using asynchronous I/O in Python.

It uses Python's generators to provide a friendly syntax for coroutines and continuations. It performs well and handles high concurrency with ease.

An HTTP/1.1 implementation is included as an example, which can be used for building web applications.

**Here's how easy it is to write a simple echo server:**

```python
from diesel import Application, Service, until_eol

def echo(remote_addr):
    # Read one line from the client, then echo it back.
    their_message = yield until_eol()
    yield "you said: %s\r\n" % their_message.strip()

app = Application()
app.add_service(Service(echo, 7050))
app.run()
```
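If you run this and connect with something like `telnet localhost 7050` (the port used in the example), typing a line should get back a "you said: ..." reply.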

*You can find a nice overview of the nitty-gritty details in the documentation.*

## Why diesel?

Currently, we are developing diesel primarily to meet the requirements of ShopTalk, a web-based group chat application for companies. However, we have been using the library for several years now on other projects. It has been tested with many applications, both HTTP and otherwise. We've found that it makes writing asynchronous applications a breeze.

We are releasing diesel as open source now, because we sense that the community is becoming more interested in asynchronous applications due to the rise in popularity of Comet. The open source community can benefit from diesel's accessible API, and diesel can benefit from the testing and contributions of the community.

## Community

We will be actively answering questions on the mailing list. Feel free to join in the discussion and send us your questions and comments.

The diesel source is hosted on Bitbucket. Head over there to grab the source and fork the repository. The source repository also contains a copy of the documentation.

## The Future

The underlying asynchronous library is only one piece of the puzzle. Once this foundation is in place, it becomes much easier to build more useful asynchronous software. In fact, we already have.

We will be releasing the other components of our stack as they become more mature. You can follow us on Twitter if you'd like to be kept apprised of future diesel releases.

# Background

diesel is a framework for writing network applications using asynchronous I/O.

## What is Asynchronous I/O?

The basic concurrency decision a network application has to make is what to do while waiting for data to arrive, or for a socket to become writable, when multiple connections are involved. The problem is best explained using the recv() syscall. recv() is how most network applications retrieve data from a socket: it is passed a socket file descriptor, and it blocks until data is available, at which point the data is returned.
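As a concrete illustration (this snippet is ours, not from the original diesel docs, and the address is made up), a single blocking read with the standard socket module looks like this:

```python
import socket

# Connect somewhere (the address is hypothetical, purely for illustration).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

# recv() blocks right here: the process does nothing else until the kernel
# has data for this one socket (or the peer closes the connection).
data = sock.recv(4096)
print("got %d bytes" % len(data))
sock.close()
```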

With more than one socket, however, problems arise if you ignore the needs of concurrency. recv() is socket-specific, so your entire program blocks waiting on data to arrive on socket A. Meanwhile, sockets B, C, and D all have data waiting to be processed by the application. Oh well.

Many applications solve this by using multiple worker threads (or processes), which are passed a socket from a central, dispatching thread. Each worker thread "owns" exactly one socket at a time, so it can feel free to call recv(socket) and wait whenever appropriate. The operating system's scheduler will run other threads for processing data on other sockets until the original thread has data ready.
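Here is a rough sketch of that pattern using only the standard library (this is our illustration, not diesel code; the echo behavior and port number are invented):

```python
import socket
import threading

def handle(conn):
    # This worker owns exactly one socket, so blocking in recv() is fine;
    # the OS scheduler runs the other workers while this one waits.
    while True:
        data = conn.recv(4096)
        if not data:
            break
        conn.sendall(data)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 7050))
server.listen(50)

while True:
    # The central dispatching thread hands each accepted socket to a worker.
    conn, addr = server.accept()
    threading.Thread(target=handle, args=(conn,)).start()
```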

Asynchronous I/O takes a different approach. Typically, some system call is invoked that blocks on *many* sockets at the same time, and which returns information about any file descriptors that are ready for reading or writing *right now*. This allows the program to know that, for particular sockets, data is already on the input buffer and recv() will return immediately. In these applications, since (theoretically) nothing ever blocks on an I/O syscall for an individual socket, only one thread is necessary.
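For comparison, here is a minimal single-threaded echo server built on the portable select() call (again our sketch for illustration; diesel itself sits on epoll, as described under Installation):

```python
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 7050))
server.listen(50)

sockets = [server]
while True:
    # Block on *all* of the sockets at once; readable holds only those that
    # have data (or a pending connection) ready right now.
    readable, _, _ = select.select(sockets, [], [])
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()
            sockets.append(conn)
        else:
            data = sock.recv(4096)  # guaranteed not to block here
            if data:
                sock.sendall(data)
            else:
                sockets.remove(sock)
                sock.close()
```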

## Advantages of Asynchronous I/O

The memory overhead of each connection is typically much lower than with the other approaches, which makes asynchronous I/O ideally suited to situations where socket concurrency runs into the hundreds or thousands. No fork() or spawn() needs to be invoked to handle a surge of connections, and no thread pool management needs to take place. Additionally, switching between activity on different sockets does not require the operating system scheduler or a context switch between threads. All these factors add up: daemons written using asynchronous I/O are typically the definitive performance champions in their category.

Also, because there is often only one thread running in an application, no complex and expensive locking needs to occur on shared data structures; a routine executing against shared data is guaranteed not to be interrupted in the middle of a series of operations.
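A tiny illustration of that guarantee (our example, not from the original docs): a read-modify-write on shared state needs no lock as long as the handler does not yield control in the middle of it.

```python
# Shared state touched by many connection handlers in the same process.
stats = {"requests": 0}

def count_request():
    # In a single-threaded async app nothing else can run between these two
    # statements, so the increment cannot be lost to a race; in a threaded
    # server the same code would need a lock around it.
    current = stats["requests"]
    stats["requests"] = current + 1
```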

## Disadvantages of Asynchronous I/O

The inertia of existing code and developer preferences is a challenge. Most well-known client libraries block, so you can pretty much toss them out the window (or sandbox them on a thread, killing most of the aforementioned advantages). And blocking style "feels" more natural and intuitive to most programmers than async does. It's usually perceived as easier to write and, especially, to read.

Threaded or multi-processed approaches are also better poised to take advantage of multiple cores. You're already involving the OS in scheduling, and you're already (hopefully) locking your shared data correctly, so running your programs "automatically" across multiple cores is possible. There are ways to do this with async approaches, of course, but they're arguably more explicit.

Finally, handlers within async applications must be good neighbors and give control back to the main loop within a reasonable amount of time, or else they can block all other processing from occurring. CPU-intensive operations devoted to individual sockets can be problematic.
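In diesel terms, a sketch of staying a good neighbor might look like the following; the work function is a stand-in we invented, and we are assuming the sleep token (introduced under Fundamentals) accepts a tiny delay as a way of handing control back to the hub:

```python
from diesel import sleep

def do_expensive_work(chunk):
    # Stand-in for some CPU-heavy computation.
    return sum(i * i for i in range(10000))

def crunch_numbers():
    for chunk in range(1000):
        do_expensive_work(chunk)
        # Yield back to the event hub between chunks so other loops and
        # sockets still get serviced.
        yield sleep(0.001)
```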

# Installation

## Prerequisites

In writing diesel, we had to choose between a difficult installation process (pyevent/libevent) or supporting only a specific, but common, platform. We chose the latter.

Currently, diesel is built on Python 2.6's epoll support in the standard library. This means that diesel requires Python 2.6 running on a Linux system. We aren't opposed to adding support for more systems in the future, but right now, that's exactly what you need.

The good news is, it doesn't require anything other than the standard library.
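If you want to confirm that your platform provides the epoll support diesel relies on, a quick check from Python is enough (the check itself is ours, not part of diesel):

```python
import select

# diesel's event hub needs the standard library's epoll wrapper, which is
# only available on Linux with Python 2.6 or newer.
if not hasattr(select, "epoll"):
    raise SystemExit("select.epoll is not available; diesel will not run on this platform")
```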

## Installation

Provided you have setuptools installed, you can install using the standard Python Cheeseshop (PyPI) route:

```
easy_install -UZ diesel
```

**Examples and Docs**

We do recommend you get the source, however, which contains lots of useful examples and a copy of this documentation.

The latest source and docs can always be downloaded from Bitbucket: http://bitbucket.org/boomplex/diesel/

# Fundamentals

So diesel does network applications async-style. That we've covered.

What's unusual (and, we think, awesome) about it is its preservation of the "blocking" feel of synchronous applications by (ab)use of Python's generators.

## How does it Work?

Let's dive in…

Every "thread" of execution is managed by a generator. These generators are expected to yield special tokens that diesel knows how to process. Let's take a look at a simple generator that uses the sleep token:

```python
def print_every_second():
    while True:
        print "hi!"
        yield sleep(1.0)

def print_every_two_seconds():
    while True:
        print "hi!"
        yield sleep(2.0)
```

Let's imagine that both these loops were run at the same time within diesel; here's an examination of what would go on from diesel's perspective:

1. print_every_second() is scheduled
2. A sleep token is yielded, requesting a wakeup in 1 second
3. A one-second timer is registered with the diesel event hub
4. Are there any other loops to run? Yes, so:
5. print_every_two_seconds() is scheduled
6. A sleep token is yielded, requesting a wakeup in 2 seconds
7. A two-second timer is registered with the diesel event hub
8. Are there any other loops to run? No, so:
9. The main event hub loop waits until the timer that fires the soonest is ready (1s)
10. Timers are processed to see what needs to be scheduled
11. Run any scheduled loops… and so on.

Take a minute to recognize what's going on here: we're running cooperative loops that appear to be using easily-read, blocking, threaded behavior, but they're actually running within one process!

Hopefully that provides a sense of what diesel is doing and how generators can turn async into blocking-ish routines. Now let's put internals aside and focus on how to *use* diesel.

## Boilerplate

The truth is the example above wasn't a full diesel application; here's what a runnable version would look like:

```python
from diesel import Application, Loop, sleep

def print_every_second():
    while True:
        print "hi!"
        yield sleep(1.0)

def print_every_two_seconds():
    while True:
        print "hi!"
        yield sleep(2.0)

app = Application()
app.add_loop(Loop(print_every_second))
app.add_loop(Loop(print_every_two_seconds))
app.run()
```

Still, not too bad.

Every diesel app has exactly one Application instance. This class represents the main event hub as well as all the Loop and Service instances it schedules.
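To illustrate how those pieces fit together, here is a small sketch (ours, built only from the API shown above; the port, handler, and messages are invented) that registers both a Service and a Loop on one Application:

```python
from diesel import Application, Service, Loop, sleep, until_eol

def echo(remote_addr):
    # Service handler: diesel runs one of these generators per connection.
    their_message = yield until_eol()
    yield "you said: %s\r\n" % their_message.strip()

def heartbeat():
    # Plain Loop: not tied to any socket, just scheduled by the event hub.
    while True:
        print "still alive"
        yield sleep(5.0)

app = Application()
app.add_service(Service(echo, 7050))
app.add_loop(Loop(heartbeat))
app.run()
```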

## Loops and Services