Planet Plone

This is where developers and integrators write about Plone, and is your best source for news and developments from the community.

April 29, 2016

Asko Soukka: Embedding Robot Framework tests and keywords into Sphinx documentation

by Asko Soukka at 2016-04-29T16:47:39Z

Robot Framework ships with decent tools for generating reference documentation out of your robot keywords and test data (see libdoc and testdoc). Yet, when Timo Stollenwerk presented Robot Framework as part of his talk about TDD at PloneConf 2012, the first question from the audience was whether you could include tests as examples in the narrative documentation of your package.

I'm not sure how much effort it would require to make Robot Framework support testable documentation (similarly to the doctest module), or whether it would even make sense...

The other way around, however, is easy.
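For comparison, the "testable documentation" direction mentioned above already exists in core Python: with the doctest module, examples embedded in docstrings are executed as tests. A minimal self-contained sketch:

```python
import doctest

def multiply(a, b):
    """Multiply two numbers.

    >>> multiply(2, 3)
    6
    >>> multiply(0, 5)
    0
    """
    return a * b

# Execute the examples embedded in this module's docstrings as tests.
failures = doctest.testmod().failed
print(failures)  # 0 when every embedded example passes
```

This is the behaviour one might wish Robot Framework supported for narrative documentation; sphinxcontrib-robotdoc goes the other way and embeds real tests into the docs instead.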

Introducing sphinxcontrib-robotdoc

Sphinx is the current state-of-the-art documentation generation tool of the Python community. Sphinx is based on Docutils, which makes it very easy to extend its reStructuredText markup with custom directives.

There's also a real killer app for it: ReadTheDocs.

So, in the spirit of the autodoc extension for Sphinx, I wanted to use my sprint time at the PloneConf to start a new Sphinx extension for embedding Robot Framework tests and user keywords into narrative package documentation.

This work is now available as: sphinxcontrib-robotdoc.

The robot_-directives

sphinxcontrib-robotdoc introduces two new custom Docutils-directives to be used in Sphinx documentation:

  1. robot_tests and
  2. robot_keywords.

Both directives accept 1) an optional regular expression filter and 2) a mandatory source option with a relative path for locating your Robot Framework test data or resource file. In addition, the robot_tests directive accepts 3) an optional comma-separated list of tags for selecting the embedded tests from the parsed test data.

For example:

  1. Embed all tests from a test suite:

     .. robot_tests::
        :source: ../src/my_package/tests/acceptance/my_suite.txt

  2. Embed all tests starting with Log in from a test suite:

     .. robot_tests:: Log in.*
        :source: ../src/my_package/tests/acceptance/my_suite.txt

  3. Embed all tests tagged with login or logout from a test suite:

     .. robot_tests::
        :source: ../src/my_package/tests/acceptance/my_suite.txt
        :tags: login, logout

  4. Embed all user keywords from a test or a resource file:

     .. robot_keywords::
        :source: ../src/my_package/tests/acceptance/my_suite.txt

  5. Embed all user keywords starting with Log in from a test or a resource file:

     .. robot_keywords:: Log in.*
        :source: ../src/my_package/tests/acceptance/my_suite.txt

When test cases or user keywords contain documentation, it gets parsed with a nested Docutils parser. This also supports links between keywords, and links from the narrative to keywords, as long as both the link and its target are embedded onto the same Sphinx page.

Enabling for ReadTheDocs

If you are new to ReadTheDocs, you should start with their Getting Started guide.

ReadTheDocs does support custom Sphinx plugins (the ones that are not distributed with Sphinx's main distribution), but there are a few things to know:

  1. As usual, you must add the plugin into the extensions list of your Sphinx configuration (usually conf.py). Also, remember to convert dashes in package names to underscores:

     extensions = ['sphinxcontrib_robotdoc']
  2. The required plugin must be published (probably at PyPI), like sphinxcontrib-robotdoc.

  3. You must edit your ReadTheDocs project through their dashboard to enable Use virtualenv:

     Use virtualenv
     [x] Install your project inside a virtualenv using setup.py install
  4. Your package must include a pip requirements file requiring the Sphinx plugin you are using (with its possibly required minimum version):

  5. The requirements file itself could be made specific for ReadTheDocs by placing it under a package subdirectory, e.g. ./docs/requirements.txt.

  6. Finally, you must edit your ReadTheDocs project through their dashboard to point it at your requirements file:

    Requirements file:

Done. Now the next ReadTheDocs build of your documentation should be able to use your custom Sphinx plugin, e.g. sphinxcontrib-robotdoc.
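For reference, a minimal docs/requirements.txt for the setup above could look like the following (the contents are purely illustrative; use the actual plugins and minimum versions your documentation needs):

```
# docs/requirements.txt (illustrative example)
sphinxcontrib-robotdoc
```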

With a full example

At the PloneConf, I gave a presentation with Jukka Ojaniemi on doing AMQP-based system integrations for Plone. For the presentation, I wrote a minimal publish-subscribe example for Plone, containing also a pair of acceptance tests written with Robot Framework.

Here goes my

and, finally, the results at ReadTheDocs.

And then what?

So, if you do acceptance driven development, shouldn't your acceptance criteria be good enough to be embedded as examples of your product's usage into its narrative documentation?

Actually, I don't want to argue more on that... I'll describe a real use case instead:

plone.act is the new acceptance test library for Plone and Plone add-on developers. It is implemented as an importable resource of Robot Framework user keywords, built on top of Robot Framework's built-in keywords and Selenium2Library keywords. Of course, it's still far from complete.

For plone.act, we need to write narrative, tutorial-like documentation, including descriptions of the available keywords and examples of their use in custom test cases. The best way to do this and keep it in sync with the current implementation, I believe, is to embed the actual keywords and test cases into the documentation.

And, I hope, we can do that with sphinxcontrib-robotdoc, and enhance it a lot during the process.

Asko Soukka: Create custom views for Dexterity-types TTW

by Asko Soukka at 2016-04-29T16:47:22Z

Plone 4.3 will ship with Dexterity, the new content type framework of Plone. One of the many great features in Dexterity is the schema editor, which makes it possible to create new content types completely through-the-web (TTW) -- no coding, no packaging, no buildout, no restarts.

But once you have the new types, you'll need to be able to write custom views for them. Dexterity was supposed to ship next to a thing called Deco Layout Editor, but because that's not here yet, there's no official way to define custom views TTW.

Of course, because Plone adds the current content type name as a class name on the HTML body tag, you can apply CSS and Diazo rules for the built-in default view.

With some old friends, however, you can get much further.

Disclaimer: This method has not been tested yet with Plone 4.3, but only with Plone 4.2.x and Dexterity 1.x -series.

Create a new content type

Creating a new Dexterity-based content type is almost as easy as it could get. Well, once you have successfully included it in your buildout and started your site with it.

  1. Activate Dexterity Content Types from the Add-ons panel under Site Setup.

  2. Click Add New Content Type... on Dexterity Content Types -panel under Site Setup.

  3. Enter required details for the new type.
  4. Click Add to create it.

  5. Click your newly created type on Dexterity Content Types -panel.

  6. Add the fields your data requires.

Note that every new type will be created with the Dublin Core behavior enabled. This means that every type will have the usual Plone metadata fields automatically (including title and description), and you only need to add your custom data fields.

While creating the new type, write down the following technical details:

  • Short Name for your content type (selected during the 3rd step).
  • Short Name for every custom field of your content type (created during the last step).

As soon as you have created a content type, it's addable from the Add new... -menu everywhere in your site.

Define a custom default view

Currently, defining a custom view for your content type (TTW) requires visiting some older parts of Plone:

  1. Enter to ZMI (Zope Management Interface) from Site Setup.

  2. Open a tool called portal_types.

  3. Scroll to the bottom of the displayed content type list.

  4. Open the link with the name of your new content type to open the type information form of your new type.

  5. Locate fields Default view method and Available view methods.

  6. Just below the default value view in Available view methods, enter a new line with a filename-like name of your new custom view, e.g. shortnameofmytype_view.

It's important that no other type has used the same name before. Usually you are safe by prefixing the view name with the short name of your type.

    (You can remove the default line view later to drop the option to show content with the built-in default view.)

  7. Replace the value of Default view method with the new view name you entered into Available view methods (e.g. shortnameofmytype_view).
  8. Save Changes at the bottom of the form.

With these steps you've told Plone to use a custom view of your own as the default view of your content type. But because that view doesn't really exist yet, Plone would raise an error when trying to view content of the new type, until a matching page template has been written.

Write a template for the view

To write a page template to work as your newly defined custom default view for your new content type, you have to re-enter the ZMI:

  1. Enter to ZMI (Zope Management Interface) from Site Setup.

  2. Open a tool called portal_skins.

  3. Open a folder named custom.

  4. Select Page Template from the Add-list.

  5. Click Add (only if Add Page Template -form didn't open already).

  6. Enter the name of your view as the id of the new page template (e.g. shortnameofmytype_view).
  7. Click Add and Edit.

Now you should be able to:

  1. Enter a title for your new view (title may be visible for the content editors in content item's Display-menu).

  2. Type in a template for your view:

    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"
          xmlns:tal="http://xml.zope.org/namespaces/tal"
          xmlns:metal="http://xml.zope.org/namespaces/metal"
          metal:use-macro="context/main_template/macros/master">

    <metal:css fill-slot="style_slot">
    <style type="text/css">
    /* Replace this with your view's custom CSS */
    </style>
    </metal:css>

    <metal:javascript fill-slot="javascript_head_slot">
    <script type="text/javascript">
    jQuery(function($) {
    // Replace this with your view's custom onLoad-jQuery-code.
    });
    </script>
    </metal:javascript>

    <metal:content-core fill-slot="content-core">
    <metal:content-core define-macro="content-core"
                        tal:define="widgets nocall:context/@@view">

    <!-- Replace this with the HTML layout of your custom view.

    The widgets-variable, which is defined above, gives you access
    to the field widgets of your custom fields through the built-in
    default view included in Dexterity (but only for the fields that
    are visible in the built-in default view, excluding e.g. widgets
    for Dublin Core metadata fields).

    It's crucial to use the available widgets for rendering
    RichText-fields, but widgets also do some special formatting for
    numeric fields, at least. In general, it's a good practice to
    use the widget for rendering the field value.

    You can render a field widget (e.g. for a **Rich Text** -field or
    a **File Upload** -field) with the following TAL-syntax:

    <div tal:replace="structure widgets/++widget++shortnameofmyfield/render">
    This will be replaced with the rendered content of the field.
    </div>

    Widgets for fields of activated behaviors are prefixed with the
    interface of the behavior:

    <div tal:replace="structure widgets/++widget++IMyBehavior.fieldname/render">
    This will be replaced with the rendered content of the field.
    </div>

    Images are best rendered with image scales, like:

    <img tal:replace="structure context/@@images/shortnameofmyfield/thumb" />

    You can define the available sizes (e.g. **thumb**) in **Site
    Setup**.

    Finally, you can always get and render values manually, as
    required for hidden Dublin Core -fields:

    <p>Last updated:
    <span tal:define="modification_date context/modification_date"
          tal:content="modification_date">Modification date</span></p>
    -->

    </metal:content-core>
    </metal:content-core>

    </html>

An example of a template

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"
      xmlns:tal="http://xml.zope.org/namespaces/tal"
      xmlns:metal="http://xml.zope.org/namespaces/metal"
      metal:use-macro="context/main_template/macros/master">

<metal:css fill-slot="style_slot">
<style type="text/css">
.documentDescription { margin-bottom: 1em; }
#content-core img { float: left; margin: 0 1em 1em 0; }
</style>
</metal:css>

<metal:content-core fill-slot="content-core">
<metal:content-core define-macro="content-core"
                    tal:define="widgets nocall:context/@@view">

<img tal:replace="structure context/@@images/portrait/thumb" />

<div tal:replace="structure widgets/++widget++contact_information/render">
Contact information.
</div>

</metal:content-core>
</metal:content-core>

</html>

Done, what next?

Workflows provides a sane TTW workflow editor for Plone.
Add forms
collective.pfg.dexterity provides a PloneFormGen adapter for creating Dexterity content from form input (and therefore allows anonymous users to submit new content).
Referenceability provides UID-linking support for Dexterity types.
Membership management
dexterity.membrane allows Dexterity types with the fields first_name, last_name and email to define new users for a Plone site.
Versioning provides the familiar content versioning support for Dexterity types.
collective.dexteritytextindexer lets you define which fields should be indexed into the SearchableText index. Unfortunately, TTW configuration, while possible, is not trivial and may require another blog post...

Have fun!

by Asko Soukka at 2016-04-29T16:47:03Z

WAVE Web Accessibility Tool is a popular service for detecting accessibility issues on your websites. WAVE Toolbar is an offline version of the service, distributed as a downloadable, self-installable Firefox add-on. Both the service and the toolbar are produced and copyrighted by WebAIM, a US-based non-profit organization, but are usable without cost.

During the last PLOG I was asked by Paul whether it would be possible to automate WAVE Toolbar -powered accessibility checks with Robot Framework (and its Selenium library). It is.


WAVELibrary is a new Robot Framework library, packaged as robotframework-wavelibrary, that provides keywords for executing WAVE Toolbar accessibility analysis directly within Robot Framework tests.

Together with the Selenium library, it allows you to prepare any test situation in your web product (e.g. log in and open some form), execute the WAVE analysis, and either pass or fail the test according to the results. These tests can also be integrated with your CI to avoid accidentally introducing new accessibility issues.

WAVELibrary is open source, so if its current features are not enough, thanks to Robot Framework syntax, you can easily contribute and make it better.

(Please note that you should not rely solely on WAVE Toolbar for validating your product's accessibility, because accessibility should always be verified by a human. Yet, WAVE Toolbar can assist you in detecting possible accessibility issues, WAVELibrary can help in automating that, and once you are accessible, WAVE Toolbar and WAVELibrary together can help you stay accessible.)

Basic usage


$ curl -O


[buildout]
parts = pybot

[pybot]
recipe = zc.recipe.egg
eggs =


*** Settings ***

Library         WAVELibrary

Suite setup     Open WAVE browser
Suite teardown  Close all browsers

*** Test Cases ***

Test single page
    [Documentation]  Single page test could interact with the target
    ...              app as much as required and end with triggering
    ...              the accessibility scan.
    Go to
    Check accessibility errors

Test multiple pages
    [Documentation]  Template based test can, for example, take a list
    ...              of URLs and perform accessibility scan for all
    ...              of them. While regular test would stop for the
    ...              first failure, template based test will just jump
    ...              to the next URL (but all failures will be reported).
    [Template]  Check URL for accessibility errors

See also all the available keywords (in addition to the built-in robot keywords and selenium keywords).


$ python
$ bin/buildout


$ bin/pybot example.robot

Test single page :: Single page test could interact with the target | PASS |
Test multiple pages :: Template based test can, for example, take ... | FAIL |
Wave reported errors for ERROR: Form label missing !=
Example | FAIL |
2 critical tests, 1 passed, 1 failed
2 tests total, 1 passed, 1 failed
Output: /.../output.xml
Log: /.../log.html
Report: /.../report.html

In addition to the generated Robot Framework test report and log, there should be a WAVE Toolbar -annotated screenshot of each tested page, and WAVELibrary also tries to take a cropped screenshot of each visible accessibility error found on the page.

Plone usage

While WAVELibrary has no dependencies on Plone, it's tailored to work well with it, so that it's easy to run accessibility tests against a sandboxed test instance.


$ curl -O


[buildout]
extends =
parts = robot

[robot]
recipe = zc.recipe.egg
eggs =


*** Settings ***

Library   WAVELibrary

Resource  plone/app/robotframework/server.robot

Suite Setup     Setup
Suite Teardown  Teardown

*** Variables ***

${START_URL}  about:

*** Keywords ***

Setup
    Setup Plone site
    Import library  Remote  ${PLONE_URL}/RobotRemote
    Enable autologin as  Site Administrator
    Set autologin username  test-user-1

Teardown
    Teardown Plone Site

*** Test Cases ***

Test Plone forms
    [Template]  Check URL for accessibility errors

Test new page form tabs
    [Template]  Check new page tabs for accessibility errors

*** Keywords ***

Check new page tabs for accessibility errors
    [Arguments]  ${fieldset}
    Go to  ${PLONE_URL}/createObject?type_name=Document
    ${location} =  Get location
    Go to  ${PLONE_URL}
    Go to  ${location}#fieldsetlegend-${fieldset}
    Check accessibility errors


$ python
$ bin/buildout


$ bin/pybot plone.robot

One more thing...

What about recording those test runs?

  1. Get VirtualBox and Vagrant.

  2. Get and build my Robot Recorder kit:

    $ git clone git://
    $ cd robotrecorder_vagrant && vagrant up && cd ..
  3. Figure out your computer's local IP...

  4. Record the previously described Plone-suite:

    $ ZSERVER_HOST=mycomputerip bin/pybot -v ZOPE_HOST:mycomputerip -v REMOTE_URL:http://localhost:4444/wd/hub plone.robot

Recordings are saved at ./robotrecorder_vagrant/recordings.

by Asko Soukka at 2016-04-29T16:46:49Z

This post may contain traces of legacy Zope2 and Python 2.x.

Some may think that Plone is bad at concurrency, because it's not commonly deployed with WSGI, but runs on top of a barely known last-millennium asynchronous HTTP server called Medusa.

See, the out-of-the-box installation of Plone launches with only a single asynchronous HTTP server with just two fixed long-running worker threads. And it's way too easy to write custom code that keeps those worker threads busy (for example, by writing blocking calls to external services), effectively resulting in denial of service for the rest of the incoming requests.

Well, as far as I know, the real bottleneck is not Medusa, but the way ZODB database connections work. It seems that to optimize the database connection related caches, ZODB is best used with a fixed number of concurrent worker threads, and one dedicated database connection per thread. Finally, because of MVCC in ZODB, each thread can serve only one request at a time.

In practice, of course, Plone-sites use ZEO-clustering (and replication) to overcome the limitations described above.
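The thread-per-connection pattern described above can be illustrated with plain Python: each worker thread lazily opens one dedicated resource and keeps it in thread-local storage, so its cache stays warm for that thread. (DummyConnection is a stand-in for illustration, not a real ZODB API.)

```python
import threading

class DummyConnection(object):
    """Stand-in for a ZODB connection with its own object cache."""
    def __init__(self, name):
        self.name = name
        self.cache = {}

_local = threading.local()
_counter = [0]
_lock = threading.Lock()

def get_connection():
    # Each thread lazily opens one dedicated connection and then
    # reuses it for every request that thread serves.
    if not hasattr(_local, 'connection'):
        with _lock:
            _counter[0] += 1
            _local.connection = DummyConnection('conn-%d' % _counter[0])
    return _local.connection

names = []

def worker():
    first = get_connection()
    second = get_connection()
    assert first is second  # same connection on every call in this thread
    names.append(first.name)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(names))  # two distinct connections, one per thread
```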

Back to the topic (with a disclaimer): the methods described in this blog post have not been battle-tested yet and may turn out to be bad ideas. Still, it's been fun to figure out how our old asynchronous friend, Medusa, could be used to serve more concurrent requests in certain special cases.

ZPublisher stream iterators

If you have been working with Plone long enough, you must have heard the rumor that blobs, which basically means files and images, are served from the filesystem in a special non-blocking way.

So, when someone downloads a file from Plone, the current worker thread only initiates the download and can then continue to serve the next request. The actual file is left to be served asynchronously by the main thread.

This is possible because of a ZPublisher feature called stream iterators (search for the IStreamIterator interface and its implementations in Zope2). Stream iterators are basically a way to postpone I/O-bound operations into the main thread's asyncore loop through a special Medusa-level producer object.

And because stream iterators are consumed only within the main thread, they come with some very strict limitations:

  • they are executed only after a completed transaction so they cannot interact with the transaction anymore
  • they must not read from the ZODB (because their origin connection is either closed or in use of their origin worker thread)
  • they must not fail unexpectedly, because you don't want to crash the main thread
  • they must not block the main thread, for obvious reasons.

Because of these limitations, the stream iterators, as such, are usable only for the purpose they have been made for: streaming files or similar immediately available buffers.
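The contract behind stream iterators is small: an object that knows its full length up front (so Content-Length can be set) and yields ready-made chunks without touching the database. A pure-Python sketch of that shape, for illustration only (not the actual Zope interface):

```python
class BufferStreamIterator(object):
    """Iterate over an in-memory buffer in fixed-size chunks.

    Mimics the shape of a ZPublisher stream iterator: the data is
    already complete when iteration starts, so serving it needs no
    database access and cannot fail mid-stream.
    """

    def __init__(self, data, chunk_size=4):
        self.data = data
        self.chunk_size = chunk_size
        self.position = 0

    def __len__(self):
        # Lets the publisher set Content-Length before streaming.
        return len(self.data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.position >= len(self.data):
            raise StopIteration
        chunk = self.data[self.position:self.position + self.chunk_size]
        self.position += self.chunk_size
        return chunk

    next = __next__  # Python 2 spelling, as used in the post's code

it = BufferStreamIterator(b'hello world!', chunk_size=5)
print(list(it))  # [b'hello', b' worl', b'd!']
```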

Asynchronous stream iterators

What if you could use ZPublisher's stream iterator support also for CPU-bound post-processing tasks? Or for post-processing tasks requiring calls to external web services or command-line utilities?

If you have a local Plone instance running somewhere, you can add the following proof-of-concept code and its slow_ok method into a new External Method (also available as a gist):

import StringIO
import threading

from zope.interface import implements
from ZPublisher.Iterators import IStreamIterator
from ZServer.PubCore.ZEvent import Wakeup

from zope.globalrequest import getRequest


class zhttp_channel_async_wrapper(object):
    """Medusa channel wrapper to defer producers until released"""

    def __init__(self, channel):
        # (executed within the current Zope worker thread)
        self._channel = channel

        self._mutex = threading.Lock()
        self._deferred = []
        self._released = False
        self._content_length = 0

    def _push(self, producer, send=1):
        if (isinstance(producer, str)
                and producer.startswith('HTTP/1.1 200 OK')):
            # Fix Content-Length to match the real content length
            # (an alternative would be to use chunked encoding)
            producer = producer.replace(
                'Content-Length: 0\r\n',
                'Content-Length: {0:s}\r\n'.format(str(self._content_length)))
        self._channel.push(producer, send)

    def push(self, producer, send=1):
        # (executed within the current Zope worker thread)
        with self._mutex:
            if not self._released:
                self._deferred.append((producer, send))
            else:
                self._push(producer, send)

    def release(self, content_length):
        # (executed within the exclusive async thread)
        self._content_length = content_length
        with self._mutex:
            for producer, send in self._deferred:
                self._push(producer, send)
            self._released = True
        Wakeup()  # wake up the asyncore loop to read our results

    def __getattr__(self, key):
        return getattr(self._channel, key)


class AsyncWorkerStreamIterator(StringIO.StringIO):
    """Stream iterator to publish the results of the given func"""

    implements(IStreamIterator)

    def __init__(self, func, response, streamsize=1 << 16):
        # (executed within the current Zope worker thread)

        # Init buffer
        StringIO.StringIO.__init__(self)
        self._streamsize = streamsize

        # Wrap the Medusa channel to wait for the func results
        self._channel = response.stdout._channel
        self._wrapped_channel = zhttp_channel_async_wrapper(self._channel)
        response.stdout._channel = self._wrapped_channel

        # Set content-length as required by ZPublisher
        response.setHeader('content-length', '0')

        # Fire the given func in a separate thread
        self.thread = threading.Thread(target=func, args=(self.callback,))
        self.thread.start()

    def callback(self, data):
        # (executed within the exclusive async thread)
        self.write(data)
        self.seek(0)
        self._wrapped_channel.release(len(data))

    def next(self):
        # (executed within the main thread)
        if not self.closed:
            data = self.read(self._streamsize)
            if not data:
                self.close()
            else:
                return data
        raise StopIteration

    def __len__(self):
        return len(self.getvalue())


def slow_ok_worker(callback):
    # (executed within the exclusive async thread)
    import time
    time.sleep(1)
    callback('OK')


def slow_ok():
    """The publishable example method"""
    # (executed within the current Zope worker thread)
    request = getRequest()
    return AsyncWorkerStreamIterator(slow_ok_worker, request.response)

The above code example simulates a trivial post-processing with time.sleep, but it should apply for anything from building a PDF from the extracted data to calling an external web service before returning the final response.

An out-of-the-box Plone instance can handle only two (2) concurrent calls to a method that takes one (1) second to complete.

In the above code, however, the post-processing could be delegated to a completely new thread, freeing the Zope worker thread to continue to handle the next request. Because of that, we can get much much better concurrency:
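Stripped of the Zope machinery, the pattern in the code above is: hand the slow work to a new thread, return immediately, and let a callback release the buffered response once the result is ready. A minimal stand-alone sketch (all names here are illustrative):

```python
import threading

class DeferredResponse(object):
    """Buffer output until a worker thread releases it."""

    def __init__(self):
        self._done = threading.Event()
        self._data = None

    def callback(self, data):
        # (runs in the worker thread, like release() in the post's code)
        self._data = data
        self._done.set()

    def result(self, timeout=5):
        # (the asyncore loop would be woken up instead of blocking;
        # we block here only to keep the sketch self-contained)
        self._done.wait(timeout)
        return self._data

def slow_worker(callback):
    import time
    time.sleep(0.1)  # simulate slow post-processing, as in the post
    callback('OK')

response = DeferredResponse()
threading.Thread(target=slow_worker, args=(response.callback,)).start()
print(response.result())  # OK
```

The "worker thread" that started the job is free as soon as the Thread is launched; only the consumer waits for the callback.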

$ ab -c 100 -n 100 http://localhost:8080/Plone/slow_ok
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,

Benchmarking localhost (be patient).....done

Server Software: Zope/(2.13.22,
Server Hostname: localhost
Server Port: 8080

Document Path: /Plone/slow_ok
Document Length: 2 bytes

Concurrency Level: 100
Time taken for tests: 1.364 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 15400 bytes
HTML transferred: 200 bytes
Requests per second: 73.32 [#/sec] (mean)
Time per request: 1363.864 [ms] (mean)
Time per request: 13.639 [ms] (mean, across all concurrent requests)
Transfer rate: 11.03 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 1 2 0.6 2 3
Processing: 1012 1196 99.2 1202 1359
Waiting: 1011 1196 99.3 1202 1359
Total: 1015 1199 98.6 1204 1361

Percentage of the requests served within a certain time (ms)
50% 1204
66% 1256
75% 1283
80% 1301
90% 1331
95% 1350
98% 1357
99% 1361
100% 1361 (longest request)

Of course, most of the stream iterator limits still apply: an asynchronous stream iterator must not access the database, which limits the possible use cases a lot. For the same reason, plone.transformchain is effectively skipped (no Diazo or Blocks), which makes this usable only for non-HTML responses.


To go even further with the experiment, what if you could do similar non-blocking asynchronous processing in the middle of a request? For example, free the current Zope worker thread while fetching a missing or outdated RSS feed in a separate thread, and only then continue to render the final response.

An interesting side effect of using stream iterators is that they allow you to inject code into the main thread's asynchronous loop. And when you are there, it's even possible to queue a completely new request for ZPublisher to handle.

So, how does the following approach sound:

  • let add-on code annotate requests with promises for fetching the required data (each promise would be a standalone function, which could be executed under the asynchronous stream iterator rules and, when called, would resolve into a value, effectively the future of the promise), for example:

    def content(self):
        if 'my_unique_key' in IFutures(self.request):
            return IFutures(self.request)['my_unique_key']
        IPromises(self.request)['my_unique_key'] = my_promise_func
        return u''
  • when promises are found, the response is turned into an asynchronous stream iterator, which would then execute all the promises in parallel threads and collect the resolved values, the futures:

    def transformIterable(self, result, encoding):
        if IPromises(self.request):
            return PromiseWorkerStreamIterator(
                IPromises(self.request), self.request, self.request.response)
        return None
  • finally, we'd wrap the current Medusa channel in a way that, instead of publishing any data yet, queues a cloned request for the ZPublisher (similarly to how retries are done after conflict errors), with the cloned request annotated to carry the resolved futures:

    def next(self):
        if self._futures:
            self._futures = {}  # mark consumed to raise StopIteration
            from ZServer.PubCore import handle
            handle('Zope2', self._zrequest, self._zrequest.response)
        raise StopIteration
  • now the add-on code in question would find the futures from the request, not issue any promises anymore, and the request would result in a normal response pushed all the way to the browser that initiated the original request.

I'm not sure yet, how good or bad idea this would be, but I've been tinkering with a proof-of-concept implementation called experimental.promises to figure it out.

Of course, there are limits and issues to be aware of. Handling the same request twice is not free, which makes the approach effective only when some significant processing can be moved outside the worker threads. Also, because there may be other requests between the first and the second pass (freeing the worker to handle other requests is the whole point), the database may change between the passes (kind of breaking the MVCC promise). Finally, it's currently possible to write code that always sets new promises and ends up in a never-ending loop.
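The two-pass idea can be sketched without any Zope at all: the first pass records promises, workers resolve them into futures, and a replayed second pass finds the futures and renders normally. All names below are illustrative stand-ins, not the experimental.promises API:

```python
import threading

def render(request):
    """First pass records a promise; the replayed pass finds the future."""
    if 'feed' in request.setdefault('futures', {}):
        return 'feed: %s' % request['futures']['feed']  # second pass
    # First pass: no future yet, so record a promise and defer output.
    request.setdefault('promises', {})['feed'] = lambda: 'hello from RSS'
    return ''

def publish(request):
    body = render(request)
    promises = request.get('promises', {})
    if promises:
        # Resolve every promise in its own thread, collect the futures,
        # then replay the request (the "cloned request" of the post).
        def resolve(key, func):
            request['futures'][key] = func()
        threads = [threading.Thread(target=resolve, args=item)
                   for item in promises.items()]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        request['promises'] = {}
        return publish(request)  # second pass renders normally
    return body

print(publish({}))  # feed: hello from RSS
```

Note how a promise that kept re-issuing itself would make publish() recurse forever, which is exactly the never-ending-loop hazard mentioned above.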

Anyway, if you are interested in trying out these approaches (at your own risk, of course), feel free to ask more via Twitter or IRC.

by Asko Soukka at 2016-04-29T16:46:35Z

No more custom skins folder with infamous ploneCustom in Plone 5, they said.

Well, they can take away the skins folder, but they cannot take away our ploneCustom. I know that the recommended way of customizing Plone 5 is via a custom theme through the Theming control panel in Site Setup. Still, sometimes you only need to add a few custom rules on top of an existing theme, and creating a completely new theme would feel like overkill.

Meet the new resource registry

One of the many big changes in Plone 5 is the completely new way CSS and JavaScript resources are managed. Plone 5 introduces a completely new Resource Registries control panel and two new concepts to manage CSS and JavaScript there: resources and resource bundles.

A resource is a single CSS / LESS file, a single JavaScript file, or one of each, providing some named feature for Plone 5. For example, a new embedded JavaScript-based applet could be defined as a resource containing both its JavaScript code and the required CSS / LESS stylesheet. In addition, JavaScript files can depend on named RequireJS modules provided by other resources, and LESS files can include any number of other available LESS files. (LESS is a superset of CSS with some optional superpowers like hierarchical directives, variables and optimized includes.)

A resource bundle is a composition of named resources, which is eventually built into a single JavaScript and/or CSS file to be linked with each rendered page. When the page is rendered, bundles are linked (using either script tags or stylesheet link tags) in an order that depends on their mutual dependencies. Bundles can be disabled and can have conditions, so they are somewhat comparable to the legacy resource registry registrations in Plone 4 and earlier.
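Linking bundles "in an order depending on their mutual dependencies" is conceptually a topological sort. A hedged sketch of that idea (illustrative only, not the actual registry code):

```python
def link_order(bundles):
    """Order bundle names so every bundle comes after its dependencies.

    `bundles` maps a bundle name to the list of bundle names it
    depends on.
    """
    ordered, visiting, done = [], set(), set()

    def visit(name):
        if name in done:
            return
        if name in visiting:
            raise ValueError('dependency cycle at %s' % name)
        visiting.add(name)
        for dep in bundles.get(name, []):
            visit(dep)  # link dependencies first
        visiting.discard(name)
        done.add(name)
        ordered.append(name)

    for name in sorted(bundles):
        visit(name)
    return ordered

# A bundle depending on the Plone bundle is linked after it:
print(link_order({'plone': [], 'ploneCustom': ['plone']}))
# ['plone', 'ploneCustom']
```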

Now that you should be familiar with the concepts, you can bring our precious ploneCustom back to life.

Defining the next generation ploneCustom

These steps will define a new ploneCustom bundle, which provides both a custom CSS (with LESS) and a custom JavaScript file to allow arbitrary site customizations without introducing a new theme.

Creating and editing

At first, you need to add the actual LESS and JavaScript files. Instead of the deprecated skins custom folder you can add them into your Plone 5 site by using the old friend, ZMI (Zope Management Interface).

If you are running a development site, please open the following URL: https://localhost:8080/Plone/portal_resources/manage_main

This portal_resources is the new database (ZODB) based storage for any kind of custom resources (introduced with the new Theming control panel in Plone 4.3). Its functionality is based on plone.resource, but right now you only need to know how to use it with Plone 5 resource registries.

  1. So, in portal_resources, add a new BTreeFolder2 with name plone:
  2. Then navigate into that folder (select plone and press the Edit button) and add another BTreeFolder2 with name custom, and navigate on until you are at portal_resources/plone/custom:
  3. Now Add a new File named ploneCustom.js and another named ploneCustom.less:
  4. And, finally, you can navigate into those files (select and press the Edit button) to edit and save them with your CSS and JavaScript:

    The example JavaScript above would only show an annoying alert to tell that it works:

    jQuery(function($) {
      alert("Hello World!");
    });

    The example CSS above would replace the portal logo with a custom text:

    #portal-logo:before {
      display: inline-block;
      content: "My Plone Site";
      font-size: 300%;
    }
    #portal-logo img {
      display: none;
    }
    In addition to that, you could add a little bit extra to learn more. The following lines would re-use button classes from the Bootstrap 3 resources shipped with Plone 5 (beta). This is an example of how to use LESS to cherry pick just a few special CSS rules from the Bootstrap 3 framework and apply them next to the currently active theme:

    @import (reference) "../++plone++static/components/bootstrap/less/bootstrap.less";
    #searchGadget_form .searchButton {
      /* cherry-picked Bootstrap button rules go here */
    }

Registering and enabling

To register the resource and add it into a bundle (or create a new one), go to Resource Registries control panel (e.g. at http://localhost:8080/@@resourceregistry-controlpanel). Click Add resource to show the add resource form and fill it like in the screenshot below:

Note that the strings ++plone++custom/ploneCustom.js and ++plone++custom/ploneCustom.less are actually relative (public) URLs for the resources you just added into portal_resources.

After saving the resource by clicking Save, click Add bundle to create a new bundle for your ploneCustom resource. Fill in the opened form as follows:

Note that the bundle depends on the Plone bundle. That makes it load only after the Plone bundle, which includes jQuery, which our custom JavaScript code depends on. (Later you may wonder why jQuery was not required with RequireJS. That would also work and is recommended for other libraries, but currently you can rely on jQuery being globally available after the Plone bundle has been loaded.)

When you have saved the new ploneCustom resource bundle, it will appear in the Bundles list on the left. The final step is to click the Build button below the ploneCustom bundle label in that list. That will open a popup modal to overview the build progress.

Once the build is done, you can click Close and reload the page to see your new ploneCustom bundle being applied for your site:

Note how the Plone logo has been replaced with a custom text and the Search button has been styled after Bootstrap 3 button styles. (Also, you should by now have seen an annoying alert popup from your ploneCustom JavaScript.)

To modify your ploneCustom bundle, just go to edit the files and return to the Resource Registries control panel to click the Build button again.

Now you have your ploneCustom back in Plone 5. Congratulations!

P.S. Don't forget that you can also tweak (at least the default) Plone theme a lot from the Resource Registries control panel without ploneCustom bundle simply by changing theme's LESS variables and building Plone bundle.

EXTRA: TTW ReactJS App in Plone

The new Resource Registries may feel complex to begin with, but once you get used to them, they are a blessing. Just define dependencies properly, and never again will you need to order Plone CSS and JavaScript resources manually, and never again (well, once add-ons get updated for this new configuration) should add-ons break your site by re-registering resources in a broken order.

As an example, let's implement a ReactJS Hello World for Plone TTW using the new resource registry:

At first, you need to register the ReactJS library as a resource. You could upload the library into portal_resources, but for a quick experiment you can also refer to a cloud hosted version. So, go to the Resource Registries control panel and Add resource with the following details:

Note how the library is defined to be wrapped for requirejs with name react013. (Plone 5 actually ships with ReactJS library, but because the version in the first beta is just 0.10, we need to add newer version with a version specific name.)

Next, go to portal_resources/plone/custom/manage_main as before and add a new file called reactApp.js with the following ReactJS Hello World as its contents:

define([
  'react013'
], function(React) {

'use strict';

var ExampleApplication = React.createClass({
  render: function() {
    var elapsed = Math.round(this.props.elapsed / 100);
    var seconds = elapsed / 10 + (elapsed % 10 ? '' : '.0' );
    var message = 'React has been successfully running for ' + seconds + ' seconds.';
    return React.createElement("p", null, message);
  }
});

var start = new Date().getTime();

setInterval(function() {
  React.render(
    React.createElement(ExampleApplication, {elapsed: new Date().getTime() - start}),
    document.getElementById('content')  // example mount node
  );
}, 50);

return ExampleApplication;

});


jQuery(function($) {
  require(['reactApp']);
});

Note how ReactJS is required as react013, and how the example application is required as reactApp at the bottom (using jQuery onLoad convention).

Of course, also reactApp must be defined as a new resource at Resource Registries control panel. It should depend on previously added resource react013 being wrapped for requirejs and export itself for requirejs as reactApp:

Finally, you can Add bundle for this example reactApp:

And after Save, Build the bundle from the button below the new bundle name in Bundles list:

Note that, because the cloud hosted ReactJS library was used, the new bundle contains only the code from reactApp.js and requirejs will require ReactJS from the cloud on-demand. If you would have added the library into portal_resources, it would have been included in the resulting bundle.

After page reload, your ReactJS Hello World should be alive:

by Asko Soukka at 2016-04-29T16:39:42Z

Triggering asynchronous tasks from Plone is hard, we hear. And that's actually quite surprising, given that, from its very beginning, Plone has been running on top of the first asynchronous web server written in Python, Medusa.

Of course, there exist many, too many, different solutions to run asynchronous tasks with Plone:

  • is the only one in the Plone namespace, and probably the most criticized one, because it uses ZODB to persist its task queue
  • netsight.async, on the other hand, is simpler, just executing the given task outside the Zope worker pool (but requiring its own database connection).
  • finally, if you happen to like Celery, Nathan Van Gheem is working on a simple Celery-integration, collective.celery, based on an earlier work by David Glick.

To add insult to injury, I've ended up developing more than one method myself: because of being warned about, and then hit hard by, the opinionated internals of Celery; because of being unaware of netsight.async; and because no single solution has fit all my use cases.

I believe, my various use cases can mostly be fit into these categories:

  • Executing simple tasks with unpredictable execution time so that the execution cannot block all of the valuable Zope worker threads serving HTTP requests (the number of threads is fixed in Zope, because a ZODB connection cache cannot be shared between simultaneous requests and one can afford only so much server memory per site).

    Examples: communicating to external services, loading an external RSS feed, ...

  • Queueing a lot of background tasks to be executed now or later, because possible results can be delivered asynchronously (e.g. user can return to see it later, can get notified about finished tasks, etc), or when it would benefit to be able to distribute the work between multiple Zope worker instances.

    Examples: converting files, encoding videos, burning PDFs, sending a lot of emails, ...

  • Communicating with external services.

    Examples: integration between sites or different systems, synchronizing content between sites, performing migrations, ...

For further reading about all the possible issues when queueing asynchronous tasks, I'd recommend Wichert Akkerman's blog post about task queues.

So, here's the summary, from my simplest solution to enterprise messaging with RabbitMQ:

ZPublisher stream iterator workers

class MyView(BrowserView):

    def __call__(self):
        return AsyncWorkerStreamIterator(some_callable, self.request)

I've already blogged earlier in detail about how to abuse ZPublisher's stream iterator interface to free the current Zope worker thread and process the current response outside Zope worker threads before letting the response to continue its way towards the requesting client (browser).
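The essence of the trick can be illustrated outside Zope with plain Python threads. The class below is only a toy stand-in for ZPublisher's stream iterator interface, but it shows how construction can return immediately while the real work happens in a background thread and the result is streamed once it is ready:

```python
import threading
import queue

class AsyncResultIterator:
    """Toy stand-in for a ZPublisher stream iterator: construction returns
    at once, the work runs in a background thread, and iteration streams
    the result when it is ready."""

    def __init__(self, func):
        self._queue = queue.Queue()
        threading.Thread(target=lambda: self._queue.put(func())).start()

    def __iter__(self):
        # blocks only the streaming side, not the original caller
        yield self._queue.get()

def some_callable():
    return b'zipped bytes, fetched outside the worker pool'

result = b''.join(AsyncResultIterator(some_callable))
print(result)
```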

An example of this trick is yet another zip-export add-on, collective.jazzport. It exports Plone folders as zip files by downloading all those to-be-zipped files separately simply through ZPublisher (or, actually, using the site's public address). It can also download files in parallel to use all the available load-balanced instances. Yet, because it downloads files only after freeing the current Zope worker thread, it should not block any worker thread by itself (see its source for the details).

There are two major limitations for this approach (common to all ZPublisher stream iterators):

  • The code should not access ZODB after the worker thread has been freed (unless a completely new connection with new cache is created).
  • This does not help installations with HAProxy or a similar front-end proxy with a fixed number of allowed simultaneous requests per Zope instance.

Also, of course, this is not real async, because it keeps the client waiting until the request is completed and cannot distribute work between Zope instances.

collective.futures

class MyView(BrowserView):

    def __call__(self):
        try:
            return futures.result('my_unique_key')
        except futures.FutureNotSubmittedError:
            futures.submit('my_unique_key', some_callable, 'foo', 'bar')
            return u'A placeholder value, which is never really returned.'

collective.futures was the next step from the previous approach. It provides a simple API for registering multiple tasks (which do not need to access ZODB) so that they will be executed outside the current Zope worker thread.

Once all the registered tasks have been executed, the same request will be queued for ZPublisher to be processed again, now with the responses from those registered tasks.

Finally, the response will be returned to the requesting client like any other response.

collective.futures has the same issues as the previous approach (used in collective.jazzport), and it may also waste resources by processing certain parts of the request twice (like publish traverse).

We use this, for example, for loading external RSS feeds so that the Zope worker threads are freed to process other requests while we are waiting for the external services to return those feeds.

collective.taskqueue

class MyView(BrowserView):

    def __call__(self):
        taskqueue.add('/Plone/path/to/some/other/view')
        return u'Task queued, and a better view could now display a throbber.'

collective.taskqueue should be a real alternative for and netsight.async. I see it as a simple and opinionated sibling of collective.zamqp, and it should be able to handle all the most basic asynchronous tasks where no other systems are involved.

collective.taskqueue provides one or more named asynchronously consumed task queues, which may contain any number of tasks: asynchronously dispatched simple requests to any traversable resources in Plone.

With out-of-the-box Plone (without any other add-ons or external services) it provides instance-local volatile memory based task queues, which are consumed by the other one of the default two Zope worker threads. With Redis, it supports persistent task queues with guaranteed delivery and distributed consumption. For example, you could have dedicated Plone instances to only consume those shared task queues from Redis.
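The queue-and-consume model itself is simple. Here is a stdlib-only sketch (not the collective.taskqueue API) in which "views" only enqueue work and return immediately, while a dedicated consumer thread drains the queue:

```python
import queue
import threading

task_queue = queue.Queue()
results = []

def consume():
    # a dedicated consumer thread draining the shared queue,
    # like a Zope worker thread reserved for the task queue
    while True:
        task = task_queue.get()
        if task is None:  # sentinel to stop the consumer
            break
        results.append(task())

consumer = threading.Thread(target=consume)
consumer.start()

# "views" only queue work and return immediately
task_queue.put(lambda: 'converted file')
task_queue.put(lambda: 'sent emails')

task_queue.put(None)
consumer.join()
print(results)  # ['converted file', 'sent emails']
```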

To not sound too good to be true, collective.taskqueue does not have any kind of monitoring of the task queues out-of-the-box (only an instance-Z2.log entry with the resulting status code for each consumed task is generated).

collective.zamqp

class MyView(BrowserView):

    def __call__(self):
        producer = getUtility(IProducer, name='my.asyncservice')
        producer.register()  # bind to successful transaction
        producer.publish({'title': u'My title'})
        return u'Task queued, and a better view could now display a throbber.'

Finally, collective.zamqp is a very flexible asynchronous framework and RabbitMQ integration for Plone, which I re-wrote from affinitic.zamqp before figuring out any of the previous approaches.

As the story behind it goes, we did use affinitic.zamqp at first, but because of its issues we had to start a rewrite to make it more stable and compatible with newer AMQP specifications. At first, I tried to build it on top of Celery, then on top of Kombu (the transport framework behind Celery), but in the end it had to be based directly on top of pika (0.9.4), a popular Python AMQP library. Otherwise it would have been really difficult to benefit from all the possible features of RabbitMQ and be compatible with other than Python based services.

collective.zamqp is best used for configuring and executing asynchronous messaging between Plone sites and other AMQP-connected services. It's also possible to use it to build frontend messaging services (possibly secured using SSL) with RabbitMQ's WebSTOMP server (see the chatbehavior example). Yet, it has a few problems of its own:

  • it depends on five.grok
  • it's way too tightly integrated with pika 0.9.5, which makes upgrading the integration more difficult than necessary (and pika 0.9.5 has a few serious bugs related to synchronous AMQP connections, luckily not required for c.zamqp)
  • it has quite a bit of poorly documented magic in how to use it for all the possible AMQP messaging configurations.

collective.zamqp does not provide monitoring utilities of its own (beyond very detailed logging of messaging events). Yet, the basic monitoring needs can be covered with RabbitMQ's web and console UIs and RESTful APIs, and all decent monitoring tools should have their own RabbitMQ plugins.

For more detailed examples of collective.zamqp, please, see my related StackOverflow answer and our presentation from PloneConf 2012 (more examples are linked from the last slide).

by Asko Soukka at 2016-04-29T16:39:32Z

TL;DR: I forked collective.transmogrifier into just transmogrifier (not yet released) to make its core usable without Plone dependencies, use Chameleon for TAL-expressions, be installable with just pip install and be compatible with Python 3.

Transmogrifier is one of the many great developer tools by the Plone community. It's a generic pipeline tool for data manipulation, configurable with plain text INI-files, while new re-usable pipeline section blueprints can be implemented and packaged in Python. It could be used to process any number of things, but historically it's been mainly developed and used as a pluggable way to import legacy content into Plone.

A simple transmogrifier pipeline for dumping news from Slashdot to a CSV file could look like:

[transmogrifier]
pipeline =
    from_rss
    to_csv

[from_rss]
blueprint = transmogrifier.from
modules = feedparser
expression = python:modules['feedparser'].parse(options['url']).get('entries', [])
url =

[to_csv]
blueprint = transmogrifier.to_csv
fieldnames =

filename = slashdot.csv

Actually, at the time of writing this, I had yet to do any Plone migrations using transmogrifier. But when we recently had a reasonably sized non-Plone migration task, I knew not to re-invent the wheel, but to transmogrify it. And we succeeded. A transmogrifier pipeline helped us to design the migration better, and splitting the data processing into multiple pipeline sections helped us to delegate the work between multiple developers.
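The core idea of transmogrifier — each section wraps the previous one as a generator — can be sketched in plain Python. The section names below are made up for illustration (real blueprints get previous and options from the pipeline configuration):

```python
def from_list(items):
    # source section: yields the initial items into the pipe
    for item in items:
        yield item

def add_title_length(previous):
    # transform section: enriches each item flowing through
    for item in previous:
        item['length'] = len(item['title'])
        yield item

def to_list(previous, sink):
    # constructor section: consumes items at the end of the pipe
    for item in previous:
        sink.append(item)
        yield item

sink = []
pipeline = to_list(add_title_length(from_list([{'title': 'Hello'}])), sink)
for item in pipeline:  # driving the last section pulls items through all of them
    pass
print(sink)  # [{'title': 'Hello', 'length': 5}]
```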

Unfortunately, collective.transmogrifier currently has unnecessary dependencies on CMFCore, is not installable without a long known good set of versions and is missing any built-in command-line interface. At first, I tried to do all the necessary refactoring inside collective.transmogrifier, but eventually a fork was required to make the transmogrifier core usable outside Plone environments, be compatible with Python 3 and not break any existing workflows depending on the old transmogrifier.

So, meet the new transmogrifier:

  • can be installed with pip install (although, not yet released at PyPI)
  • new mr.migrator inspired command-line interface (see transmogrify --help for all the options)
  • new base classes for custom blueprints
    • transmogrifier.blueprints.Blueprint
    • transmogrifier.blueprints.ConditionalBlueprint
  • new ZCML-directives for registering blueprints and re-usable pipelines
    • <transmogrifier:blueprint component="" name="" />
    • <transmogrifier:pipeline id="" name="" description="" configuration="" />
  • uses Chameleon for TAL-expressions (e.g. in ConditionalBlueprint)
  • has only a few generic built-in blueprints
  • supports z3c.autoinclude for package transmogrifier
  • fully backwards compatible with blueprints for collective.transmogrifier
  • runs with Python >= 2.6, including Python 3+

There's still much work to do before a real release (e.g. documenting and testing the new CLI-script and new built-in blueprints), but let's still see how it works already...

P.S. Please, use a clean Python virtualenv for these examples.

Example pipeline

Let's start with an easy installation

$ pip install git+
$ transmogrify --help
Usage: transmogrify <pipelines_and_overrides>...
transmogrify --list
transmogrify --show=<pipeline>

and with example filesystem pipeline.cfg

[transmogrifier]
pipeline =
    from_rss
    to_csv

[from_rss]
blueprint = transmogrifier.from
modules = feedparser
expression = python:modules['feedparser'].parse(options['url']).get('entries', [])
url =

[to_csv]
blueprint = transmogrifier.to_csv
fieldnames =

filename = slashdot.csv

and its dependencies

$ pip install feedparser

and the results

$ transmogrify pipeline.cfg
INFO:transmogrifier:CSVConstructor:to_csv wrote 25 items to /.../slashdot.csv

using, for example, Python 2.7 or Python 3.4.

Minimal migration project

Let's create an example migration project with custom blueprints using Python 3. In addition to transmogrifier, we need venusianconfiguration for easy blueprint registration and, of course, the actual dependencies for our blueprints:

$ pip install git+
$ pip install git+
$ pip install fake-factory

Now we can implement custom blueprints in a module of our own, for example:

from venusianconfiguration import configure

from transmogrifier.blueprints import Blueprint
from faker import Faker

@configure.transmogrifier.blueprint.component(name='faker_contacts')
class FakerContacts(Blueprint):
    def __iter__(self):
        for item in self.previous:
            yield item

        amount = int(self.options.get('amount', '0'))
        fake = Faker()

        for i in range(amount):
            yield {
                'address': fake.address(),
            }

and see them registered next to the built-in ones (or from the other packages hooking into transmogrifier autoinclude entry-point):

$ transmogrify --list --include=blueprints

Available blueprints

Now, we can make an example pipeline.cfg

[transmogrifier]
pipeline =
    from_faker
    to_csv

[from_faker]
blueprint = faker_contacts
amount = 2

[to_csv]
blueprint = transmogrifier.to_csv

and enjoy the results

$ transmogrify pipeline.cfg to_csv:filename=- --include=blueprints
"534 Hintz Inlet Apt. 804
Schneiderchester, MI 55300"
,Dr. Garland Wyman
"44608 Volkman Islands
Maryleefurt, AK 42163"
,Mrs. Franc Price DVM
INFO:transmogrifier:CSVConstructor:to_csv saved 2 items to -

An alternative would be to just use the shipped mr.bob-template...

Migration project using the template

The new transmogrifier ships with an easy getting started template for your custom migration project. To use the template, you need a Python environment with mr.bob and the new transmogrifier:

$ pip install mr.bob readline # readline is an implicit mr.bob dependency
$ pip install git+

Then you can create a new project directory with:

$ mrbob bobtemplates.transmogrifier:project

Once the new project directory is created, inside the directory, you can install the rest of the dependencies and activate the project with:

$ pip install -r requirements.txt
$ python develop

Now transmogrify knows your project's custom blueprints and pipelines:

$ transmogrify --list

Available blueprints

Available pipelines
Example: Generates uppercase mock addresses

And the example pipeline can be executed with:

$ transmogrify myprojectname_example
ISSAC KOSS I,"PSC 8465, BOX 1625
APO AE 97751"

TESS FAHEY,"PSC 7387, BOX 3736
APO AP 13098-6260"

INFO:transmogrifier:CSVConstructor:to_csv wrote 2 items to -

Please, see the created README.rst for how to edit the example blueprints and pipelines and create more.

Mandatory example with Plone

Using the new transmogrifier with Plone should be as simple as adding it into your buildout.cfg next to the old transmogrifier packages:

[buildout]
extends =
parts = instance plonesite
versions = versions
extensions = mr.developer
sources = sources
auto-checkout = *

[sources]
transmogrifier = git

[instance]
recipe = plone.recipe.zope2instance
eggs =

user = admin:admin
zcml =

[plonesite]
recipe = collective.recipe.plonesite
site-id = Plone
instance = instance

[versions]
setuptools =
zc.buildout =

Let's also write a fictional migration pipeline, which would create Plone content from Slashdot RSS-feed:

[transmogrifier]
pipeline =
    from_rss
    id
    fields
    folders
    constructor
    schema
    commit

[from_rss]
blueprint = transmogrifier.from
modules = feedparser
expression = python:modules['feedparser'].parse(options['url']).get('entries', [])
url =

[id]
blueprint = transmogrifier.set
modules = uuid
id = python:str(modules['uuid'].uuid4())

[fields]
blueprint = transmogrifier.set
portal_type = string:Document
text = path:item/summary
_path = string:slashdot/${item['id']}

[folders]
blueprint = collective.transmogrifier.sections.folders

[constructor]
blueprint = collective.transmogrifier.sections.constructor

[schema]
blueprint =

[commit]
blueprint = transmogrifier.to_expression
modules = transaction
expression = python:modules['transaction'].commit()
mode = items

Now, the new CLI-script can be used together with bin/instance -Ositeid run provided by plone.recipe.zope2instance so that transmogrifier will get your site as its context simply by calling zope.component.hooks.getSite:

$ bin/instance -OPlone run bin/transmogrify pipeline.cfg --context=zope.component.hooks.getSite

With Plone you should, of course, still use Python 2.7.

Funnelweb example with Plone

Funnelweb is a collection of transmogrifier blueprints and pipelines for scraping any web site into Plone. I heard that its example pipelines are a little outdated, but they make a nice demo anyway.

Let's extend our previous Plone example with the following funnelweb.cfg buildout to include all the necessary transmogrifier blueprints and the example funnelweb.ttw pipeline:

[buildout]
extends = buildout.cfg

[instance]
eggs +=

We also need a small additional pipeline commit.cfg to commit all the changes made by funnelweb.ttw:

[transmogrifier]
pipeline = commit

[commit]
blueprint = transmogrifier.interval
modules = transaction
expression = python:modules['transaction'].commit()

Now, after the buildout has been run, the following command would use pipelines funnelweb.ttw and commit.cfg to somewhat scrape my blog into Plone:

$ bin/instance -OPlone run bin/transmogrify funnelweb.ttw commit.cfg crawler:url= "crawler:ignore=feeds\ncsi.js" --context=zope.component.hooks.getSite

For tuning the import further, the used pipelines could be easily exported into filesystem, customized, and then executed similarly to commit.cfg:

$ bin/instance -OPlone run bin/transmogrify --show=funnelweb.ttw > myfunnelweb.cfg

by Asko Soukka at 2016-04-29T16:39:18Z

Updated 2015-02-15: I finally updated this post to work with the latest Plone 5 releases.

When I recently wrote about how to reintroduce ploneCustom for Plone 5 TTW (through the web) by yourself, I got some feedback that it was the wrong thing to do, and that the correct way would always be to create your own custom theme.

If you are ready to let the precious ploneCustom go, here's how to currently customize the default Barceloneta theme on the fly by creating a new custom theme.

Inherit a new theme from Barceloneta

So, let's customize a brand new Plone 5 site by creating a new theme, which inherits everything from Barceloneta theme, yet allows us to add additional rules and styles:

  1. Open Site Setup and Theming control panel.

  2. Create New theme, not yet activated, with the title mytheme (or your own title, once you get the concept)

  3. In the opened theme editor, replace the contents of index.html with the following code, and Save the file after changes:

    <!doctype html>
    <html>
      <head>
        <title>Plone Theme</title>
      </head>
      <body>
        <section id="portal-toolbar"></section>
        <div class="outer-wrapper">
          <header id="content-header" role="banner">
            <div class="container">
              <header id="portal-top"></header>
              <div id="anonymous-actions"></div>
            </div>
          </header>
          <div id="mainnavigation-wrapper">
            <div id="mainnavigation"></div>
          </div>
          <div id="above-content-wrapper">
            <div id="above-content"></div>
          </div>
          <div class="container">
            <div class="row">
              <aside id="global_statusmessage"></aside>
            </div>
            <main id="main-container" class="row row-offcanvas row-offcanvas-right" role="main">
              <div id="column1-container"></div>
              <div id="content-container"></div>
              <div id="column2-container"></div>
            </main>
          </div>
        </div> <!--/outer-wrapper -->
        <footer id="portal-footer-wrapper" role="contentinfo">
          <div class="container" id="portal-footer"></div>
        </footer>
      </body>
    </html>
  4. Then replace the contents of rules.xml with the following code, and Save the file after changes:

    <?xml version="1.0" encoding="UTF-8"?>
    <rules xmlns=""
           xmlns:css=""
           xmlns:xi="">

      <!-- Import Barceloneta rules -->
      <xi:include href="++theme++barceloneta/rules.xml" />

      <rules css:if-content="#visual-portal-wrapper">
        <!-- Placeholder for your own additional rules -->
      </rules>

    </rules>

  5. Still in the theme editor, Add new file with the name styles.less, and edit and Save it with the following content:

    /* Import Barceloneta styles */
    @import "++theme++barceloneta/less/barceloneta.plone.less";

    /* Customize navbar color */
    @plone-sitenav-bg: pink;
    @plone-sitenav-link-hover-bg: darken(pink, 20%);

    /* Customize navbar text color */
    .plone-nav > li > a {
      color: @plone-text-color;
    }

    /* Customize search button */
    #searchGadget_form .searchButton {
      /* Re-use mixin from Barceloneta */
      .button-variant(@plone-text-color, pink, @plone-gray-lighter);
    }

    /* Inspect Barceloneta theme (and its less-folder) for more... */
  6. Finally, while you still have styles.less open, you should be able to click the Build CSS button to build the currently open LESS file into a complete styles.css in your theme. (And you can use the same button to recompile your styles after any change or addition.)

    Note: Before Plone 5.0.2 you need to Add new file styles.css before building the CSS. On 5.0.2 just clicking the build is enough.

But before activating the new theme, there's one more manual step to do: Add production-css setting into your theme's manifest.cfg to point to the compiled CSS bundle:

title = mytheme
description =
production-css = /++theme++mytheme/styles.css

Now you should be ready to return back to Theming control panel, activate the theme, and see the gorgeous pink navigation bar:

by Asko Soukka at 2016-04-29T16:39:00Z

The greatest blocker for using Nix for complex Python projects like Plone, I think, is the work needed to make all the required Python packages (usually very specific versions) available as Nix expressions. In the most extreme case, that would require every version of every package on PyPI to be in nixpkgs.

Announcing collective.recipe.nix

collective.recipe.nix is my try for generating nix expressions for arbitrary Python projects. It's an experimental buildout recipe, which re-uses zc.recipe.egg for figuring out all the required packages and their dependencies.

Example of usage

At first, bootstrap your environment by defining python with buildout in ./default.nix:

with import <nixpkgs> {}; {
  myEnv = stdenv.mkDerivation {
    name = "myEnv";
    buildInputs = [
    ];
    shellHook = ''
      export SSL_CERT_FILE=~/.nix-profile/etc/ca-bundle.crt
    '';
  };
}

And example ./buildout.cfg:

[buildout]
parts =
    i18ndude
    releaser
    robot
    sphinx

[i18ndude]
recipe = collective.recipe.nix
eggs = i18ndude

[releaser]
recipe = collective.recipe.nix
eggs = zest.releaser[recommended]

[robot]
recipe = collective.recipe.nix
eggs = robotframework
propagated-build-inputs =

[sphinx]
recipe = collective.recipe.nix
eggs = sphinx
propagated-build-inputs =

Run the buildout:

$ nix-shell --run buildout

The recipe generates three kinds of expressions:

  • default [name].nix usable with nix-shell
  • buildEnv based [name]-env.nix usable with nix-build
  • buildPythonPackage based [name]-package.nix usable with nix-env -i -f

So, now you should be able to run zest.releaser with:

$ nix-shell releaser.nix --run fullrelease

You could also build Nix-environment with symlinks in folder ./releaser or into a Docker image with:

$ nix-build releaser-env.nix -o releaser

Finally, you could install zest.releaser into your current Nix-profile with:

$ nix-env -i -f releaser-zest_releaser.nix

by Asko Soukka at 2016-04-29T16:38:50Z

Nix makes it reasonable to build Docker containers from scratch. The resulting containers are still big (yet I hear there's ongoing work to make Nix builds leaner), but at least you don't need to think about choosing and keeping the base images up to date.

Next follows an example, how to make a Docker image for Plone with Nix.

Creating Nix expression with collective.recipe.nix

At first, we need a Nix expression for Plone. Here I use one built with my buildout based generator, collective.recipe.nix. It generates a few expressions, including plone.nix and plone-env.nix. The first one is only really usable with nix-shell, but the other one can be used for building a standalone Plone for a Docker image.

To create ./plone-env.nix, I need a buildout environment in ./default.nix:

with import <nixpkgs> {}; {
  myEnv = stdenv.mkDerivation {
    name = "myEnv";
    buildInputs = [
    ];
    shellHook = ''
      export SSL_CERT_FILE=~/.nix-profile/etc/ca-bundle.crt
    '';
  };
}

And a minimal Plone buildout using my recipe in ./buildout.cfg:

[buildout]
extends =
parts = plone
versions = versions

[instance]
recipe = plone.recipe.zope2instance
eggs = Plone
user = admin:admin

[plone]
recipe = collective.recipe.nix
eggs =

[versions]
zc.buildout =
setuptools =

And finally produce both plone.nix and the required plone-env.nix with:

$ nix-shell --run buildout

Creating Docker container with Nix Docker buildpack

Next up is building the container with our Nix expression with the help of a builder container, which I call Nix Docker buildpack.

At first, we need to clone that:

$ git clone
$ cd nix-build-pack-docker

And build the builder:

$ cd builder
$ docker build -t nix-build-pack --rm=true --force-rm=true --no-cache=true .
$ cd ..

Now the builder can be used to build a tarball, which only contains the built Nix derivation for Plone. Let's copy the created plone-env.nix into the current working directory and run:

$ docker run --rm -v `pwd`:/opt nix-build-pack /opt/plone-env.nix

After a while, that directory should contain a file called plone-env.nix.tar.gz, which only contains two directories in its root: /nix for the built derivation and /app for easy access symlinks, like /app/bin/python.

Now we need ./Dockerfile for building the final Plone image:

FROM scratch
ADD plone-env.nix.tar.gz /
USER 1000
ENTRYPOINT ["/app/bin/python"]

And finally, a Plone image can be built with

$ docker build -t plone --rm=true --force-rm=true --no-cache=true .

Running Nix-built Plone container

To run Plone in a container with the image built above, we still need the configuration for Plone. We can use the normal buildout generated configuration, but we need to

  1. remove from parts/instance.
  2. fix paths to match in parts/instance/zope.conf to match the mounted paths in Docker container (/opt/...)
  3. create some temporary directory to be mounted into container

Also, we need a small wrapper to call the Plone instance script, ./, because we cannot use the buildout generated one:

import sys
import plone.recipe.zope2instance.ctl

    ['-C', '/opt/parts/instance/etc/zope.conf']
    + sys.argv[1:]
)

When these are in place, within the buildout directory, we should now be able to run Plone in Docker container with:

$ docker run --rm -v `pwd`:/opt -v `pwd`/tmp:/tmp -P plone /opt/ fg

The current working directory is mapped to /opt and some temporary directory is mapped to /tmp (because our image didn't contain even a /tmp).

Note: When I tried this out, for some reason (possibly because VirtualBox mount with boot2docker), I had to remove ./var/filestorage/Data.fs.tmp between runs or I got errors on ZODB writes.

by Asko Soukka at 2016-04-29T16:38:41Z

Some days ago there was a question on the Plone IRC channel, whether the Plone theming tool supports template inheritance [sic]. The answer is no, but let's play a bit with the problem.

The preferred theming solution for Plone is based on the Diazo theming engine, which allows you to make a Plone theme from any static HTML mockup. To simplify a bit: just get a static HTML design, write a set of Diazo transformation rules, and you'll have a new Plone theme.

The idea behind this theming solution is to make the theming story for Plone the easiest in the CMS industry: just buy a static HTML design and you can use it as a theme as such. (Of course, the complexity of the required Diazo transformation rules depends on the complexity of the theme and the themed content.)

But back to the original problem: Diazo encourages the themer to use plenty of different HTML mockups to keep the transformation rules simple. One should not try to generate theme elements for different page types in Diazo transformation rules, but use dedicated HTML mockups for different page types. But what if the original HTML design came with only a very few selected mockups, and creating the rest from those is up to you? You could either copy and paste, or...

Here comes a proof-of-concept script for generating HTML mockups from TAL using the Chameleon template compiler (and Nix to remove the need for a virtualenv, because of Python dependencies).

But at first, why TAL? Because METAL macros of TAL can be used to make the existing static HTML mockups into re-usable macros/mixins with customizable slots with minimal effort.

For example, an existing HTML mockup:

<div>Here be dragons.</div>

Could be made into a re-usable TAL template (main_template.html) with:

<metal:master define-macro="master">
  <div metal:define-slot="content">
    Here be dragons.
  </div>
</metal:master>

And re-used in a new mockup with:

<html metal:use-macro="main_template.macros.master">
  <div metal:fill-slot="content">
    Thunderbirds are go!
  </div>
</html>

Resulting in a new compiled mockup:

<div>Thunderbirds are go!</div>

The script maps all direct sub-directories and all files with the .html suffix in the same directory as the compiled template into its TAL namespace, so that macros from those can be reached with the METAL syntax metal:use-macro="filebasename.macros.macroname" or metal:use-macro="templatedirname['filebasename'].macros.macroname".
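The namespace mapping described above can be sketched with the standard library alone. In this simplified stand-in (my own illustration, not part of the script below), plain tuples mark where the real script builds Chameleon PageTemplateLoader and PageTemplateFile objects:

```python
import os
import tempfile

def template_namespace(template_path):
    # Map sibling directories and .html files into a namespace dict,
    # mirroring how the script builds the compilation context.
    context = {}
    dirname = os.path.dirname(template_path) or '.'
    for name in os.listdir(dirname):
        path = os.path.join(dirname, name)
        basename, suffix = os.path.splitext(name)
        if os.path.isdir(path):
            context[basename] = ('loader', path)    # PageTemplateLoader(path, '.html')
        elif suffix == '.html':
            context[basename] = ('template', path)  # PageTemplateFile(path)
    return context

# Usage: main_template.html next to index.html becomes reachable from
# index.html as "main_template.macros.master".
root = tempfile.mkdtemp()
open(os.path.join(root, 'main_template.html'), 'w').close()
open(os.path.join(root, 'index.html'), 'w').close()
ns = template_namespace(os.path.join(root, 'index.html'))
print(sorted(ns))  # ['index', 'main_template']
```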

Finally, here comes the example code:

#! /usr/bin/env nix-shell
#! nix-shell -i python -p pythonPackages.chameleon pythonPackages.docopt pythonPackages.watchdog
"""Chameleon Composer

Copyright (c) 2015 Asko Soukka <>

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

Usage:
  ./ <filename>
  ./ src/front-page.html
  ./ <source> <destination> [--watch]
  ./ src build
  ./ src build --watch
"""

from __future__ import print_function
from chameleon import PageTemplateFile
from chameleon import PageTemplateLoader
from docopt import docopt
from watchdog.observers import Observer
from watchdog.observers.polling import PollingObserver
from watchdog.utils import platform
import os
import sys
import time


def render(template):
    assert os.path.isfile(template)

    # Add siblings as templates into compilation context for macro-use
    context = {}
    dirname = os.path.dirname(template)
    for name in os.listdir(dirname):
        path = os.path.join(dirname, name)
        basename, suffix = os.path.splitext(name)
        if os.path.isdir(path):
            context[basename] = PageTemplateLoader(path, '.html')
        elif suffix == '.html':
            context[basename] = PageTemplateFile(path)

    return PageTemplateFile(template)(**context).strip()


class Composer(object):
    def __init__(self, source, destination):
        self.source = source
        self.destination = destination
        self.mapping = {}

    def update(self):
        source = self.source
        destination = self.destination
        mapping = {}

        # File to file
        if os.path.isfile(source) and os.path.splitext(destination)[-1]:
            mapping[source] = destination

        # File to directory
        elif os.path.isfile(source) and not os.path.splitext(destination)[-1]:
            mapping[source] = os.path.join(
                destination,
                os.path.splitext(os.path.basename(source))[0] + '.html')

        # Directory to directory
        elif os.path.isdir(source):
            for filename in os.listdir(source):
                path = os.path.join(source, filename)
                if os.path.splitext(path)[-1] != '.html':
                    continue
                mapping[path] = os.path.join(
                    destination,
                    os.path.splitext(os.path.basename(path))[0] + '.html')

        self.mapping = mapping

    def __call__(self):
        for source, destination in self.mapping.items():
            if os.path.dirname(destination):
                if not os.path.isdir(os.path.dirname(destination)):
                    os.makedirs(os.path.dirname(destination))
            with open(destination, 'w') as output:
                output.write(render(source))
            print('{0:s} => {1:s}'.format(source, destination))

    # noinspection PyUnusedLocal
    def dispatch(self, event):
        # TODO: Build only changed files
        self.update()
        self()

    def watch(self):
        if platform.is_darwin():
            observer = PollingObserver()  # Seen FSEventsObserver to segfault
        else:
            observer = Observer()
        observer.schedule(self, self.source, recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()


if __name__ == '__main__':
    arguments = docopt(__doc__, version='Chameleon Composer 1.0')

    if arguments.get('<filename>'):
        print(render(arguments.get('<filename>')))
        sys.exit(0)

    composer = Composer(arguments.get('<source>'),
                        arguments.get('<destination>'))
    composer.update()
    composer()

    if arguments.get('--watch'):
        print('Watching {0:s}'.format(arguments.get('<source>')))
        composer.watch()
by Asko Soukka at 2016-04-29T16:24:27Z

I just fixed my old post on customizing the Plone 5 default theme on the fly to work with the final Plone 5.0 release.

But if you could not care less about TTW (through-the-web) theme development, here's something for you too: it is possible to build a theme for Plone 5, with all of Plone 5's stylesheets and javascripts, using Webpack – the current tool of choice for bundling web app frontend resources.

With Webpack, you can completely ignore Plone 5's TTW resource registry, and build your own optimal CSS and JS bundles with all the mockup patterns and other JS frameworks you need - with live preview during development.

To try it out, take a look at my WIP example theme at:


Pros:

  • Ship your theme with Webpack-optimized resource chunks automatically split into synchronously and asynchronously required resources.
  • Get faster-than-reload live previews of your changes during development, thanks to the hot module replacement support of Webpack's development server.
  • Get complete control of Plone 5 frontend resources and completely bypass the Plone 5 TTW resource registry (it's awesome for the TTW workflow, but not optimal for the filesystem one).
  • Use the latest JS development tools (Webpack integrates nicely with Babel, ESLint and others) without the need for legacy Bower, Grunt, Gulp or RequireJS.


Cons:

  • Installing a new Plone add-on requires configuring and building the add-on's resources into the theme.
  • You are on your own now: you no longer get JS / CSS updates with new Python package releases, but always need to re-build your theme as well.

April 28, 2016

Asko Soukka: Building a Plone form widget with React + Redux

by Asko Soukka at 2016-04-28T19:58:48Z

As much as I love the new through-the-web resource registries in Plone 5 (I really do), for the current Plone 5 sites in development or already in production, I've ended up bundling all front-end resources into the theme with Webpack. That gives me the same "state of the art" frontend toolchain as in my other current projects, but it also adds some overhead, because I need to do extra work for each new add-on with front-end resources. So, I still cannot really recommend Webpack for Plone unless you are already familiar with Webpack. Yet, learning to bundle everything with Webpack really helps to appreciate how well the Plone 5 resource registries already work.

My current workflow, in brief, is to add all common configuration into plonetheme.webpack and re-use that as a git submodule in individual projects, similarly to plonetheme.webpackexample. The latter also includes the example code for this post. I was asked how everything goes together when using React and Redux for building widgets for Plone. Here's how...

(You can see the complete example in plonetheme.webpackexample, particularly in 1 and 2.)

Injecting a pattern with Diazo

In a usual use case, I have a custom content type (maybe TTW-designed) with simple textline or lines (textarea) fields, which require rich JavaScript widgets to ease entering valid input.

The current Plone convention for such widgets is to implement the widget as a Patternslib compatible pattern. The required classname (and options) for the pattern initialization could, of course, be injected by registering a custom z3c.form widget for the field, but it can also be done with a relatively simple Diazo rule with some XSLT:

<!-- Inject license selector pattern -->
<replace css:content="textarea#form-widgets-IDublinCore-rights">
  <xsl:copy>
    <xsl:attribute name="class">
      <xsl:value-of select="concat(@class, ' pat-license-selector')" />
    </xsl:attribute>
    <xsl:apply-templates select="@*[name()!='class']|node()" />
  </xsl:copy>
</replace>
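The effect of this rule can be illustrated outside Diazo with a plain Python snippet (purely a stand-in I wrote for illustration — Diazo/XSLT does this at transform time; only the field id and pattern class name come from the rule above, the surrounding form markup is made up):

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for the rendered edit form markup
html = ('<form>'
        '<textarea id="form-widgets-IDublinCore-rights" class="textarea-field">'
        '</textarea></form>')

root = ET.fromstring(html)
# Locate the rights field and append the pattern class to its classes,
# which is what the concat(@class, ' pat-license-selector') rule does
ta = root.find(".//textarea[@id='form-widgets-IDublinCore-rights']")
ta.set('class', ta.get('class', '') + ' pat-license-selector')
print(ET.tostring(root, encoding='unicode'))
```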

Registering a pattern in ES6

Of course, you cannot yet use ES6 in Plone without figuring out a way to transpile it into JavaScript currently supported by your target browsers and RequireJS (that's something which comes quite easily with Webpack). If you can do it, registering a Patternslib-compatible pattern in ES6 appears to be really simple:

import Registry from 'patternslib/core/registry';

// ... (imports for other requirements)


Registry.register({
  name: 'license-selector',
  trigger: '.pat-license-selector',

  init ($el, options) {
    // ... (pattern code)
  }
});

Choosing React + Redux for widgets

You must have already heard about the greatest benefits of using React as a view rendering library: simple unidirectional data flow with stateless views and pretty fast rendering with "virtual DOM" based optimization. While there are many alternatives to React now, it probably has the best ecosystem, and React Lite-like optimized implementations make it small enough to be embedded anywhere.

Redux, while technically independent of React, helps to enforce the React ideals of predictable stateless views in your React app. In my use case of building widgets for individual input fields, it feels optimal because of its "single data store model": it's simple to both serialize the widget value (Redux store state) into a single input field and de-serialize it later from the field for editing.

Single file React + Redux skeleton

Even though Redux is a very small library with simple conventions, it seems to be hard to find an easy example for using it. That's because most of the examples assume that you are building a large-scale app. Yet, with a single widget, it would be nice to have all the required parts close to each other in a single file.

As an example, I implemented a simple Creative Commons license selector widget, which includes all the required parts of React + Redux based widget in a single file (including Patternslib initialization):

import React from 'react';
import ReactDOM from 'react-dom';
import {createStore, compose} from 'redux';
import Registry from 'patternslib/core/registry';

// ... (all the required imports)

// ... (all repeating marker values as constants)

function deserialize(value) {
  // ... (deserialize value from field into initial Redux store state)
}

function serialize(state) {
  // ... (serialize Redux store state into input field value)
}

function reducer(state={}, action) {
  // ... ("reducer" to apply action to state and return new state)
}

export default class LicenseSelector extends React.Component {
  render() {
    // ...
  }
}

LicenseSelector.propTypes = {
  // ...
};

// ... (all the required React components with property annotations)

Registry.register({
  name: 'license-selector',
  trigger: '.pat-license-selector',

  init ($el) {
    // Get form input element and hide it
    const el = $el.hide().get(0);

    // Define Redux store and initialize it from the field value
    const store = createStore(reducer, deserialize($el.val()));

    // Create container for the widget
    const container = document.createElement('div');
    el.parentNode.insertBefore(container, el);
    container.className = 'license-selector';

    // Define main render
    function render() {
      // Serialize current widget value back into input field
      $el.val(serialize(store.getState()));

      // Render widget with current state; pass state and Redux action
      // factories as properties
      ReactDOM.render((
        <LicenseSelector
          {...store.getState()}
          setSharing={(value) => store.dispatch({
            type: SET_SHARING,  // one of the constants defined above
            value: value
          })}
          setCommercial={(value) => store.dispatch({
            type: SET_COMMERCIAL,  // one of the constants defined above
            value: value
          })}
        />
      ), container);
    }

    // Subscribe to render when state changes
    store.subscribe(render);

    // Call initial render
    render();
  }
});
Not too complex, after all...

Implementing and injecting a display widget as a theme fragment

Usually, displaying the value of a custom field requires more HTML than is convenient to inline into Diazo rules, and it may also require data which is not rendered by the default Dexterity views. My convention for implementing these "display widgets" in a theme is the following combination of theme fragments and Diazo rules.

At first, I define a theme fragment. Theme fragments are simple TAL templates saved in the ./fragments folder inside a theme, and are supported by installing the collective.themefragments add-on. My example theme has the following fragment at ./fragments/

<html xmlns="http://www.w3.org/1999/xhtml">
  <p tal:condition="context/rights|undefined">
    <img src="${context/rights}/4.0/88x31.png"
         alt="${context/rights}" />
  </p>
</html>

Finally, the fragment is injected into the desired place using Diazo. In my example, I use Diazo inline XSLT to append the fragment below the content viewlets' container:

<!-- Inject license badge below content body -->
<replace css:content="#viewlet-below-content-body">
  <xsl:copy>
    <xsl:apply-templates select="@*|node()" />
    <xsl:copy-of select="document('@@theme-fragment/license',
                                  $diazo-base-document)" />
  </xsl:copy>
</replace>

April 27, 2016

Andreas Jung: Plone 5 as foundation for XML Director Web-to-Print solutions (continued)


XML Director is a generic solution for building XML-based content management solutions on top of the CMS Plone. This video shows how we build easy-to-use web-to-print applications using CSS Paged Media (XML/HTML for input, CSS for layout and styling). This demo features the PDFreactor PDF converter and the Nimbudocs WYSIWYG editor by RealObjects.

April 26, 2016

Martijn Faassen: Morepath 0.14 released!

by Martijn Faassen at 2016-04-26T14:34:17Z

Today we released Morepath 0.14 (CHANGES).

What is Morepath? Morepath is a Python web framework that is powerful and flexible due to its advanced configuration engine (Dectate) and an advanced dispatch system (Reg), but at the same time is easy to learn. It's also extensively documented!

The part of this release that I'm the most excited about is not technical but has to do with the community, which is growing -- this release contains significant work by several others than myself. Thanks Stefano Taschini, Denis Krienbühl and Henri Hulski!

New for the community as well is that we have a web-based and mobile-supported chat channel for Morepath. You can join us with a click.

Please join and hang out!

Major new features of this release:

  • Documented extension API
  • New implementation overview.
  • A new document describing how to test your Morepath-based code.
  • Documented how to create a command-line query tool for Morepath configuration.
  • New cookiecutter template to quickly create a Morepath-based project.
  • New releases of various extensions compatible with 0.14. Did you know that Morepath has more.jwtauth, more.basicauth and more.itsdangerous extensions for authentication policy, more.static and more.webassets for static resources, more.chameleon and more.jinja2 for server templating languages, more.transaction to support SQLAlchemy and ZODB transactions and more.forwarded to support the Forwarded HTTP header?
  • Configuration of Morepath-based applications is now simpler and more explicit; we have a new commit method on application classes and applications get automatically committed during runtime if you don't do it first.
  • Morepath now performs host header validation to guard against header poisoning attacks.
  • New defer_class_links directive. This helps in a complicated app that is composed of multiple smaller applications that want to link to each other using the request.class_link method introduced in Morepath 0.13.
  • We've refactored both the publishing/view system and the link generation system. It's cleaner now under the hood.
  • Introduced an official deprecation policy as we prepare for Morepath 1.0, along with upgrade instructions.

Interested? Feedback? Let us know!