Planet Plone

This is where developers and integrators write about Plone, and is your best source for news and developments from the community.

July 06, 2015

Martijn Faassen: Thoughts about React Europe

by Martijn Faassen at 2015-07-06T15:40:00Z

Last week I visited the React Europe conference in Paris; it was the first such event in Europe and the second React conference in the world. Paris, like much of the rest of western Europe in early July, was insanely hot. The air conditioning at the conference location had trouble keeping up, and bars and restaurants were more like saunas. Nonetheless, much was learned and much fun was had. I'm glad I went!

React, in case you're not aware, is a frontend JavaScript framework. There are a lot of those. I wrote one myself years ago (before it was cool; I'm a frontend framework hipster) called Obviel. React appeals to me because it's component driven and because it makes so many complexities go away with its approach to state.

Another reason I really like React is that its community is so creative. I missed being involved with such a creative community after my exit from Zope, which was due in large part to it having become less creative. A lot of the web is being rethought by the React community. Whether all of those ideas are good remains to be seen, but it's certainly exciting, and good will come from it.

Here is a summary of my experiences at the conference, with some suggestions for the conference organizers sprinkled in. They did a great job, but this way I hope to help them make it even better.

Hackathon

When I heard there would be a hackathon the day before the conference, I immediately signed up. This would be a great way to meet other developers in the React community, work on some open source infrastructure software together, and learn from them. Then a few days before travel I learned there was a contest and prizes. Contest? Prizes? I was somewhat taken aback!

I come from the tradition of sprints in the Python world. Sprints in the Python community originated with the Zope project, and go back to 2001. Sprints can be 1-3 day affairs held either before or after a Python conference. The dedicated sprint is also not uncommon: interested developers gather together somewhere for a few days, sometimes quite a few days, to work on stuff together. This can be a small group of 4 people in someone's offices, or 10 people in a converted barn in the woods, or 30 people in a castle, or even more people in a hotel on a snowy alpine mountain in the winter. I've experienced all of that and more.

What do people do at such sprints? People hack on open source infrastructure together. Beginners are onboarded into projects by more experienced developers. New projects get started. People discuss new approaches. They get to know each other, and learn from each other. They party. Some sprints have more central coordination than others. A sprint is a great way to get to know other developers and do open source together in a room instead of over the internet.

I previously thought hackathon was just another word for sprint. But a contest makes people compete with each other, whereas a sprint is all about collaboration.

Luckily I chatted a bit online before the conference and quickly enough found another developer who wanted to work with me on open source stuff, turning it into a proper sprint after all. We put together this little library as a result. I also met Dan Abramov. I'll get back to him later.

When I arrived at the beautiful Mozilla offices in Paris where the sprint was held, it felt like a library -- everybody was quietly nestled behind their own laptop. I was afraid to speak, though characteristically that didn't last long. I may have made a comment that I thought hackathons aren't supposed to be libraries. We got a bit more noise after that.

I thoroughly enjoyed this sprint (as that is what it became after all), and learned a lot. Meanwhile the hackathon went well too for the three Dutch friends I traveled with -- they won the contest!

React Europe organizers, I'd like to request a bit more room for sprint-like collaboration at the next conference. In open source we want to emphasize collaboration more than competition, don't we?

Conference

The quality of the talks at the conference was excellent; they got me thinking, which I enjoy. I'll discuss some of the trends and list a few talks that stood out to me personally; my selection says more about my personal interests than about the quality of the talks I don't mention, though.

Inline styles and animations

Michael Chan gave a talk about Inline Styles. React is about encapsulating bits of UI into coherent components. But styling was originally left up to external CSS, apart from the components. It doesn't have to be. The React community has been exploring ways to put style information into components as well, in part replacing CSS altogether. This is a rethinking of best practices that will cause some resistance, but it is definitely very interesting. I will have to explore some of the libraries for doing this that have emerged in the React community; perhaps they will fit my developer brain better than CSS has so far.

There were also two talks about how you might define animations with React. I especially liked Cheng Lou's talk, in which he explored declarative ways to express animations. Who knows, maybe even unstylish programmers like myself will end up doing animation!

GraphQL and Relay

Lee Byron (creator of immutable-js) gave a talk about GraphQL. GraphQL is a rethinking of client/server communication originating at Facebook. Instead of leaving it up to the web server to determine the shape of the data the client code sees, GraphQL lets that be expressed by the client code. The idea is that the developer of the client UI code knows better what data they need than the server developer does (even if these are the same person). This has some performance benefits as well, as it can be used to minimize network traffic. Most important to me is that it promises a better way of client UI development: the data already arrives in the shape the client developer needs.

Lee announced the immediate release of a GraphQL introduction, GraphQL draft specification and a reference implementation in JavaScript, resolving a criticism I had in a previous blog post. I started reading the spec that night (I had missed out on the intro; it's a better place to start!).

Joseph Savona gave a talk about the Relay project, which is a way to integrate GraphQL with React. The idea is to specify what data a component needs not only on the client, but right next to the UI components that need it. Before the UI is rendered, the required data is composed into a larger GraphQL query and the data is retrieved. Relay aims to solve a lot of the hard parts of client/server communication in a central framework, making various complexities go away for UI developers. Joseph announced an open source release of Relay for around August. I'm looking forward to learning more about Relay then.

Dan Schafer and Nick Schrock gave a talk about what implementing a GraphQL server actually looks like. GraphQL is a query language, not a database system. It is designed to integrate with whatever backend services you already have, not replace them. This is good as it requires much less buy-in and you can evolve your systems towards GraphQL incrementally -- this was Facebook's own use case as well. To expose your service's data as GraphQL you need to give a server GraphQL framework a description of what your server data looks like and some code on how to obtain this data from the underlying services.

Both Dan and Nick spent the break after their talk answering a lot of questions from many interested developers, including myself. I'm really grateful to Dan for all his informative answers.

The GraphQL and Relay developers at Facebook are explicitly hoping to build a real open source community around this technology, and they made a flying start this conference.

Flux

All this GraphQL and Relay stuff is exciting, but the way most people integrate React with backends at present is through variations on the Flux pattern. There were several talks that touched upon Flux during the conference. The talk that stood out was by Dan Abramov, who I mentioned earlier. This talk has already been released as an online video, and I recommend you watch it. In it Dan develops and debugs a client-side application live, and due to the ingenious architecture behind it, he can modify code and see the changes in the application's behavior immediately, without an explicit reload and without having to reenter data. It was really eye-opening.

What makes this style of development possible is a more purely functional approach to the Flux pattern. Dan started the Redux framework, which focuses on making this kind of thing possible. Instead of defining methods that describe how to store data in some global store object, in Redux you define pure functions (reducers) that describe how to transform the store state into a new state.
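
To make the reducer idea concrete, here is a small, purely illustrative sketch in Python (Redux itself is a JavaScript library; the names and action shapes below are made up):

# purely illustrative: a reducer is a pure function (state, action) -> new state
def counter_reducer(state, action):
    if action['type'] == 'INCREMENT':
        return {'count': state['count'] + 1}
    if action['type'] == 'DECREMENT':
        return {'count': state['count'] - 1}
    return state  # unknown actions leave the state unchanged

state = {'count': 0}
for action in [{'type': 'INCREMENT'}, {'type': 'INCREMENT'}, {'type': 'DECREMENT'}]:
    state = counter_reducer(state, action)  # the state is replaced, never mutated
print(state)  # {'count': 1}

Because the store's history is just a sequence of actions fed through pure functions, you can replay it at will, which is what makes the live code reloading and time-travel debugging in Dan's demo possible.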

Dan Abramov is interesting by himself. He has quickly made a big name for himself in the React community by working on all sorts of exciting new software and approaches, while being very approachable at the same time. He's doing open source right. He's also in his early 20s. I'm old enough to have run into very smart younger developers before, so his success is not too intimidating for me. I'll try to learn from what he does right and apply it in my own open source work.

The purely functional reducer pattern was all over the conference; I saw references to it in several other talks, especially Kevin Robinson's talk on simplifying the data layer, which explored the power of keeping a log of actions. It has its applications on both clients and servers.

The React web framework already set the tone: it makes some powerful functional programming techniques surrounding UI state management available in a JavaScript framework. The React community is now mining more functional programming techniques, making them accessible to JavaScript developers. It's interesting times.

Using React's component nature

There were several talks that touched on how you can use React's component nature. Ryan Florence gave an entertaining talk about how you can incrementally rewrite an existing client-side application to use React components, step by step. Aria Buckles gave a talk on writing good reusable components with React; I recognized several mistakes I've made in my own code and better ways to do it.

Finally, in a topic close to my heart, Evan Morikawa and Ben Gotow gave a talk about how to use React and Flux to turn applications into extensible platforms. Extensible platforms are all over software development. CMSes are a very common example in web development. One could even argue that having an extensible core that supports multiple customers with customizations is the mystical quality that turns an application into "enterprise software".

DX: Developer Experience

The new abbreviation DX was used in a lot of talks. DX stands for "Developer Experience" in analogy with UX standing for "user experience".

I've always thought about this concept as usability for developers: a good library or framework offers a good user interface for developers who want to build stuff with it. A library or framework isn't there just to let developers get some stuff done, but to let them get that stuff done well: smoothly, avoiding common mistakes, and not letting them down when they need to do something special.

I really appreciated the React community's emphasis on DX. Let's make the easy things easy, and the hard things possible, together.

Gender Diversity

This section is not intended as devastating criticism but as a suggestion. I'm not an expert on this topic at all, but I did want to make this observation.

I've attended a lot of Python conferences over the years. The gender balance at these conferences was in the early years much more like the React Europe conference: mostly men with a few women here and there. But in recent years there has been a big push in the Python community, especially in North America, to change the gender balance at these conferences and the community as a whole. With success: these days PyCons in North America attract over 30% women attendees. While EuroPython still has a way to go, last year I already noticed a definite trend towards more women speaking and attending. It was a change I appreciated.

Change like this doesn't happen by itself. React Europe made a good start by adopting a code of conduct. We can learn more about what other conference organizers do. Closer to the React community I've also appreciated the actions of the JSConf Europe organizers in this direction. Some simple ideas include to actively reach out to women to submit their talk and to reach out to user groups.

Of course, for all I know this was in fact done, in which case please do keep it up! If not, that's my suggestion.

Conclusions

I really enjoyed this conference. There were a lot of interesting talks; too many to go into here. I met a lot of interesting people. Mining functional programming techniques to benefit mere mortal developers such as myself, and developer experience in general, are clearly major trends in the React community.

Now that I'm home I'm looking forward to exploring these ideas and technologies some more. Thank you organizers, speakers and fellow attendees! And then to think the conference will likely get even better next year! I hope I can make it again.

July 03, 2015

Kees Hink: Removing a Dexterity behavior in Generic Setup isn't possible

by Kees Hink at 2015-07-03T09:33:54Z

Post retracted. As Tres remarks, the output posted originally does not indicate a problem in GS but an invalid XML file. I tried setting the attribute remove="true" on the element value="plone.app.multilingual.dx.interfaces.IDexterityTranslatable" in the property name="behaviors". If the problem does exist, the workaround might be to set purge="true" on the property and list all the behaviors.

David "Pigeonflight" Bain: Help, my updated posts keep bubbling to the top of the Planet

by David Bain at 2015-07-03T04:11:00Z

I kept noticing that whenever I updated certain posts they would end up at the top of the Planet Plone RSS feed aggregator. I haven't dug too deeply into the issue, but it seems to be a mixture of the way the Planet is configured and the way default blog feeds are presented by Blogger. Thankfully, the default Blogger feed format can be easily changed. Previously the feed I used for Planet Plone

July 01, 2015

Benoît Suttor: New recipe, collective.recipe.buildoutcache

by Benoît Suttor at 2015-07-01T14:37:05Z

This recipe generates a buildout-cache archive. We use a pre-generated buildout-cache folder to speed up buildout runs.

Introduction

This recipe generates a buildout-cache archive. We use a pre-generated buildout-cache folder to speed up buildout runs. The archive contains a single buildout-cache folder. In this folder, there are 2 folders:

  • eggs: contains all eggs used by your buildout, except eggs which have to be compiled.
  • downloads: contains zipped eggs which must be compiled (such as AccessControl, lxml, Pillow, ZODB, ...)

Before starting a buildout, we download and extract the buildout-cache and use it in our buildout. We add the eggs-directory and download-cache parameters to the buildout section like this:

[buildout]
eggs-directory = buildout-cache/eggs
download-cache = buildout-cache/downloads
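
For completeness, here is a minimal sketch of that download-and-extract step (the archive URL and local file name are hypothetical; a plain curl and tar in your deployment script works just as well):

import tarfile
import urllib

# hypothetical location where your CI publishes the pre-generated archive
ARCHIVE_URL = 'http://files.example.org/buildout-cache.tar.gz'

# download the archive next to buildout.cfg (use urllib.request on Python 3)
urllib.urlretrieve(ARCHIVE_URL, 'buildout-cache.tar.gz')

# extract it; this creates ./buildout-cache/eggs and ./buildout-cache/downloads
with tarfile.open('buildout-cache.tar.gz') as archive:
    archive.extractall('.')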


Use case

In our organization, we have a Jenkins server. We created a Jenkins job which generates buildout-cache.tar.gz2 every night and pushes it to a file server.

We also use Docker; our Dockerfiles download and untar the buildout-cache before starting buildout, so creating a Docker image becomes much faster!


How it works

You simply have to add a part using this recipe to your buildout project.

Like this:

[buildout]
parts =
    ...
    makebuildoutcachetargz

[makebuildoutcachetargz]
recipe = collective.recipe.buildoutcache

You can use some parameters to change the name of your archive, use a working directory other than ./tmp, or use a buildout file other than buildout.cfg for the egg downloads. See https://github.com/collective/collective.recipe.buildoutcache.

To install the recipe you can run this command line:

./bin/buildout install makebuildoutcachetargz

And start the recipe script:

./bin/makebuildoutcachetargz


Conclusion

Use collective.recipe.buildoutcache and decrease the time you lose waiting for your buildout ;)

June 30, 2015

Davide Moro: Python mock library for ninja testing

by davide moro at 2015-06-30T22:05:15Z

If you are going to write unit tests with Python you should consider this library: Python mock (https://pypi.python.org/pypi/mock).

Powerful, elegant, easy, documented (http://www.voidspace.org.uk/python/mock/)...
and standard: mock is now part of the Python standard library, available as unittest.mock in Python 3.3 onwards.

Simple example

Let's suppose you have an existing validator function based on a dbsession import used for querying a relational database. If you are going to write unit tests, you should focus on units without involving real database connections.

validators.py
from yourpackage import DBSession

def validate_sku(value):
    ...
    courses = DBSession.query(Course).\
        filter(Course.course_sku == value).\
        filter(Course.language == context.language).\
        all()
    # validate data returning a True or False value
    ...

tests/test_validators.py
def test_validator():
    import mock
    with mock.patch('yourpackage.validators.DBSession') as dbsession:
        instance = dbsession
        instance.query().filter().filter().all.return_value = [mock.Mock(id=1)]
        from yourpackage.validators import validate_sku
        assert validate_sku(2) is True

In this case the DBSession call with query, the two filter calls and the final all invocation will produce our mock result (a list with one mock item: an object with an id attribute).

Brilliant! And this is just one simple example: check out the official documentation for further information:

June 29, 2015

Davide Moro: Pip for buildout folks

by davide moro at 2015-06-29T21:26:16Z

... or buildout for pip folks.

In this article I'm going to talk about how to manage software (Python) projects with buildout or pip.

What do I mean by project?
A package that contains all the application-specific settings, the database configuration, which packages your project will need and where they live.
Projects should be managed like software if you want to assure the needed quality.
This blog post is not:
  • intended to be a complete guide to pip or buildout. If you want to know more about pip or buildout
  • talking about how to deploy your projects remotely

Buildout

I've been using buildout for many years and we are still good friends.
Buildout definition (from http://www.buildout.org):
"""
Buildout is a Python-based build system for creating, assembling and deploying applications from multiple parts, some of which may be non-Python-based. It lets you create a buildout configuration and reproduce the same software later. 
"""
With buildout you can build and share reproducible environments, not only for Python based components.

Before buildout (if I remember well, I first started using buildout in 2007, probably during the very first Plone Sorrento sprint) it was a real pain to share a complete and working development environment pointing to the right versions of several repositories, etc. With buildout it became a matter of minutes.

(Picture from https://pypi.python.org/pypi/mr.developer.)
Probably with pip there is less fun because there isn't a funny picture that celebrates it?!
Buildout configuration files are modular and extensible (not only on a per-section basis). There are a lot of buildout recipes; the one I probably prefer is mr.developer (https://pypi.python.org/pypi/mr.developer). It allows me to fetch different versions of the repositories depending on the buildout profile in use, for example:
  • production -> each developed private egg points to a tag version
  • devel -> the same eggs point to develop/master
You can accomplish this by creating different configurations for different profiles, like this:

[buildout]

...

[sources]

your_plugin = git git@github.com:username/your_plugin.git

...

I don't like calling ./bin/buildout -c [production|devel].cfg with the -c syntax because it is too error prone. I prefer to create a symbolic link to the right buildout profile (called buildout.cfg), so you'll perform the same command both in production and during development, always typing:

$ ./bin/buildout

This way you'll avoid nasty errors like launching the wrong profile in production. So use just the plain ./bin/buildout command and live happily.

With buildout you can show and freeze all the installed versions of your packages by providing a versions.cfg file.

Here you can see my preferred buildout recipes:
Buildout or not, one of the most common needs is the ability to switch from develop to tags depending on whether you are in development or production mode, and to reproduce the same software later. I can't imagine managing software installations without this kind of quality assurance.

More info: http://www.buildout.org

Pip

Let's see how to create reproducible environments with develop or tags dependencies for production environments with pip (https://pip.pypa.io/en/latest/).

Basically you specify your devel requirements in a devel-requirements.txt file (the name doesn't matter), pointing to the develop/master/trunk of your repositories.

There is another file that I call production-requirements.txt (the file name doesn't matter) that is equivalent to the previous one, but:
  • without devel dependencies you don't want to install in production mode
  • tagging your private applications (instead of master -> 0.1.1)
This way it is quite simple to see which releases are installed in production mode, with no cryptic hash codes.

You can now use the production-requirements.txt as a template for generating an easy-to-read requirements.txt. You'll use this file when installing in production.

You can create a regular Makefile if you don't want to repeat yourself or make scripts if you prefer:
  • compile Sphinx documentation
  • provide virtualenv initialization
  • launch tests against all developed eggs
  • update the final requirements.txt file
For example, if you are particularly lazy you can create a script that builds your requirements.txt file using the production-requirements.txt as a template.
Here is a simple script, just an example, that shows how to build your requirements.txt, omitting lines with grep, sed, etc.:
#!/bin/bash

pip install -r production-requirements.txt
pip freeze -r production-requirements.txt | grep -v mip_project | sed '1,2d' > requirements.txt
When running this script, you should activate another Python environment in order to not pollute the production requirements list with development stuff.

If you want to make your software reusable and as flexible as possible, you can add a regular setup.py module with optional dependencies, that you can activate depending on what you need. For example in devel-mode you might want to activate an entry point called docs (see -e .[docs] in devel-requirements.txt) with optional Sphinx dependencies. Or in production you can install MySQL specific dependencies (-e .[mysql]).

In the examples below I'll also show how to refer to external requirements files (a url or a file).

setup.py

You can define optional extra requirements in your setup.py module.
mysql_requires = [
    'MySQL-python',
]

docs_requires = [
    'Sphinx',
    'docutils',
    'repoze.sphinx.autointerface',
]
...

setup(
    name='mip_project',
    version=version,
    ...
    extras_require={
        'mysql': mysql_requires,
        'docs': docs_requires,
        ...
    },
)

devel-requirements.txt

Optional extra requirements can be activated using the [] syntax (see -e .[docs]).
You can also include external requirements files or urls (see -r) and tell pip how to fetch some concrete dependencies (see -e git+...#egg=your_egg).
-r https://github.com/.../.../blob/VERSION/requirements.txt
 
# Kotti
Kotti[development,testing]==VERSION

# devel (not to be added in production)
zest.releaser

# Third party's eggs
kotti_newsitem==0.2
kotti_calendar==0.8.2
kotti_link==0.1
kotti_navigation==0.3.1

# Develop eggs
-e git+https://github.com/truelab/kotti_actions.git#egg=kotti_actions
-e git+https://github.com/truelab/kotti_boxes.git#egg=kotti_boxes
...

-e .[docs]

production-requirements.txt

The production requirements should point to tags (see @VERSION).
-r https://github.com/Kotti/Kotti/blob/VERSION/requirements.txt
Kotti[development,testing]==VERSION

# Third party's eggs
kotti_newsitem==0.2
kotti_calendar==0.8.2
kotti_link==0.1
kotti_navigation==0.3.1

# Develop eggs
-e git+https://github.com/truelab/kotti_actions.git@0.1.1#egg=kotti_actions
-e git+https://github.com/truelab/kotti_boxes.git@0.1.3#egg=kotti_boxes
...

-e .[mysql]

requirements.txt

The requirements.txt is autogenerated based on the production-requirements.txt model file. All the installed versions are appended in alphabetical order at the end of the file; it can be a very long list.
All the tag versions provided in the production-requirements.txt are automatically converted to hash values (@VERSION -> @3c1a191...).
Kotti==1.0.0a4

# Third party's eggs
kotti-newsitem==0.2
kotti-calendar==0.8.2
kotti-link==0.1
kotti-navigation==0.3.1

# Develop eggs
-e git+https://github.com/truelab/kotti_actions.git@3c1a1914901cb33fcedc9801764f2749b4e1df5b#egg=kotti_actions-dev
-e git+https://github.com/truelab/kotti_boxes.git@3730705703ef4e523c566c063171478902645658#egg=kotti_boxes-dev
...


## The following requirements were added by pip freeze:
alembic==0.6.7
appdirs==1.4.0
Babel==1.3
Beaker==1.6.4
... 

Final consideration

Use pip to install Python packages from PyPI.

If you’re looking for management of fully integrated cross-platform software stacks, buildout is for you.

With buildout, no Python code is needed unless you are going to write new recipes (the plugin mechanism provided by buildout to add new functionality to your software build; see http://buildout.readthedocs.org/en/latest/docs/recipe.html).

With pip you can also manage cross-platform stacks, but you lose the flexibility of buildout recipes and inheritable configuration files.

Anyway, if you consider buildout too magic or you just need a way to switch between production and development mode, you can use pip as well.

Links

If you need more info have a look at the following urls:
Other useful links:

Update 20150629

If you want an example I've created a pip-based project for Kotti CMS (http://kotti.pylonsproject.org):

Martijn Faassen: Morepath Batching Example

by Martijn Faassen at 2015-06-29T14:48:01Z

Introduction

This post is the first in what I hope will be a series on neat things you can do with Morepath. Morepath is a Python web micro framework with some very interesting capabilities. What we'll look at today is what you can do with Morepath's link generation in a server-driven web application. While Morepath is an excellent fit for creating REST APIs, it also works well for server applications. So let's look at how Morepath can help you create a batching UI.

On the special occasion of this post we also released a new version of Morepath, Morepath 0.11.1!

A batching UI is a UI where you have a larger amount of data available than you want to show to the user at once. You instead partition the data into smaller batches, and you let the user navigate through these batches by clicking previous and next links. If you have 56 items in total and the batch size is 10, you first see items 0-9. You can then click next to see items 10-19, then items 20-29, and so on until you see the last few items 50-55. Clicking previous will take you backwards again.

In this example, a URL to see a single batch looks like this:

http://example.com/?start=20

to see items 20-29. You can also approach the application like this:

http://example.com/

to start at the first batch.

I'm going to highlight the relevant parts of the application here. The complete example project can be found on Github. I have included instructions on how to install the app in the README.rst there.

Model

First we need to define a few model classes to define the application. We are going to go for a fake database of fake persons that we want to batch through.

Here's the Person class:

class Person(object):
    def __init__(self, id, name, address, email):
        self.id = id
        self.name = name
        self.address = address
        self.email = email

We use the neat fake-factory package to create some fake data for our fake database; the fake database is just a Python list:

from faker import Faker

fake = Faker()

def generate_random_person(id):
    return Person(id, fake.name(), fake.address(), fake.email())

def generate_random_persons(amount):
    return [generate_random_person(id) for id in range(amount)]

person_db = generate_random_persons(56)

So far nothing special. But next we create a special PersonCollection model that represents a batch of persons:

class PersonCollection(object):
    def __init__(self, persons, start):
        self.persons = persons
        if start < 0 or start >= len(persons):
            start = 0
        self.start = start
    def query(self):
        return self.persons[self.start:self.start + BATCH_SIZE]
    def previous(self):
        if self.start == 0:
            return None
        start = self.start - BATCH_SIZE
        if start < 0:
            start = 0
        return PersonCollection(self.persons, start)
    def next(self):
        start = self.start + BATCH_SIZE
        if start >= len(self.persons):
            return None
        return PersonCollection(self.persons, self.start + BATCH_SIZE)

To create an instance of PersonCollection you need two arguments: persons, which is going to be our person_db we created before, and start, which is the start index of the batch.

We define a query method that queries the persons we need from the larger batch, based on start and a global constant, BATCH_SIZE. Here we do this by simply taking a slice. In a real application you'd execute some kind of database query.

We also define previous and next methods. These give back the previous PersonCollection and next PersonCollection. They use the same persons database, but adjust the start of the batch. If there is no previous or next batch as we're at the beginning or the end, these methods return None.

There is nothing directly web related in this code, though of course PersonCollection is there to serve our web application in particular. But as you notice there is absolutely no interaction with request or any other parts of the Morepath API. This makes it easier to reason about this code: you can for instance write unit tests that just test the behavior of these instances without dealing with requests, HTML, etc.
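
For instance, a plain unit test for the batching logic could look like this (a sketch; it assumes BATCH_SIZE is 10, as in the example above):

# a sketch of a unit test for the batching logic, no web machinery involved
def test_person_collection_batching():
    persons = generate_random_persons(56)
    batch = PersonCollection(persons, 20)
    assert len(batch.query()) == 10           # items 20-29
    assert batch.previous().start == 10
    assert batch.next().start == 30
    assert PersonCollection(persons, 0).previous() is None
    assert PersonCollection(persons, 50).next() is None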

Path

Now we expose these models to the web. We tell Morepath what models are behind what URLs, and how to create URLs to models:

@App.path(model=PersonCollection, path='/')
def get_person_collection(start=0):
    return PersonCollection(person_db, start)
@App.path(model=Person, path='{id}',
          converters={'id': int})
def get_person(id):
    try:
        return person_db[id]
    except IndexError:
        return None

Let's look at this in more detail:

@App.path(model=PersonCollection, path='/')
def get_person_collection(start=0):
    return PersonCollection(person_db, start)

This is not a lot of code, but it actually tells Morepath a lot:

  • When you go to the root path / you get the instance returned by the get_person_collection function.
  • This URL takes a request parameter start, for instance ?start=10.
  • This request parameter is optional. If it's not given it defaults to 0.
  • Since the default is a Python int object, Morepath rejects any requests with request parameters that cannot be converted to an integer as a 400 Bad Request. So ?start=11 is legal, but ?start=foo is not.
  • When asked for the link to a PersonCollection instance in Python code, as we'll see soon, Morepath uses this information to reconstruct it.

Now let's look at get_person:

@App.path(model=Person, path='{id}',
          converters={'id': int})
def get_person(id):
    try:
        return person_db[id]
    except IndexError:
        return None

This uses a path with a parameter in it, id, which is passed to the get_person function. It explicitly sets the system to expect an int and reject anything else, but we could've used id=0 as a default parameter instead here too. Finally, get_person can return None if the id is not known in our Python list "database". Morepath automatically turns this into a 404 Not Found for you.

View & template for Person

While PersonCollection and Person instances now have a URL, we didn't tell Morepath yet what to do when someone goes there. So for now, these URLs will respond with a 404.

Let's fix this by defining some Morepath views. We'll do a simple view for Person first:

@App.html(model=Person, template='person.jinja2')
def person_default(self, request):
    return {
        'id': self.id,
        'name': self.name,
        'address': self.address,
        'email': self.email
    }

We use the html decorator to indicate that this view delivers data of Content-Type text/html, and that it uses a person.jinja2 template to do so.

The person_default function itself gets a self and a request argument. The self argument is an instance of the model class indicated in the decorator, so a Person instance. The request argument is a WebOb request instance. We give the template the data returned in the dictionary.

The template person.jinja2 looks like this:

<!DOCTYPE html>
<html>
  <head>
    <title>Morepath batching demo</title>
  </head>
  <body>
    <p>
      Name: {{ name }}<br/>
      Address: {{ address }}<br/>
      Email: {{ email }}<br />
    </p>
  </body>
</html>

Here we use the Jinja2 template language to render the data to HTML. Morepath out of the box does not support Jinja2; it's template language agnostic. But in our example we use the Morepath extension more.jinja2 which integrates Jinja2. Chameleon support is also available in more.chameleon in case you prefer that.

View & template for PersonCollection

Here is the view that exposes PersonCollection:

@App.html(model=PersonCollection, template='person_collection.jinja2')
def person_collection_default(self, request):
    return {
        'persons': self.query(),
        'previous_link': request.link(self.previous()),
        'next_link': request.link(self.next()),
    }

It gives the template the list of persons that is in the current PersonCollection instance so it can show them in a template as we'll see in a moment. It also creates two URLs: previous_link and next_link. These are links to the previous and next batch available, or None if no previous or next batch exists (this is the first or the last batch).

Let's look at the template:

<!DOCTYPE html>
<html>
 <head>
   <title>Morepath batching demo</title>
  </head>
  <body>
    <table>
      <tr>
        <th>Name</th>
        <th>Email</th>
        <th>Address</th>
      </tr>
      {% for person in persons %}
      <tr>
        <td><a href="{{ request.link(person) }}">{{ person.name }}</a></td>
        <td>{{ person.email }}</td>
        <td>{{ person.address }}</td>
      </tr>
      {% endfor %}
    </table>
    {% if previous_link %}
    <a href="{{ previous_link }}">Previous</a>
    {% endif %}
    {% if next_link %}
    <a href="{{ next_link }}">Next</a>
    {% endif %}
  </body>
</html>

A bit more is going on here. First it loops through the persons list to show all the persons in a batch in an HTML table. The name in the table is a link to the person instance; we use request.link() in the template to create this URL.

The template also shows a previous and next link, but only if they're not None, so when there is actually a previous or next batch available.

That's it

And that's it, besides a few details of application setup, which you can find in the complete example project on Github.

There's not much to this code, and that's how it should be. I invite you to compare this approach to a batching UI to what an implementation for another web framework looks like. Do you put the link generation code in the template itself? Or as ad hoc code inside the view functions? How clear and concise and testable is that code compared to what we just did here? Do you give back the right HTTP status codes when things go wrong? Consider also how easy it would be to expand the code to include searching in addition to batching.

Do you want to try out Morepath now? Read the very extensive documentation. I hope to hear from you!

Alex Clark: Pillow 2-9-0 Is Almost Out

by Alex Clark at 2015-06-29T00:01:47Z

Pillow 2.9.0 will be released on July 1, 2015.

Pre-release

Please help the Pillow Fighters prepare for the Pillow 2.9.0 release by downloading and testing this pre-release:

Report issues

As you might expect, we'd like to avoid the creation of a 2.9.1 release within 24-48 hours of 2.9.0 due to any unforeseen circumstances. If you suspect such an issue to exist in 2.9.0.dev2, please let us know:

Thank you!

June 27, 2015

Alex Clark: Plone on Heroku

by Alex Clark at 2015-06-27T23:09:24Z

Dear Plone, welcome to 2015

Picture it. The year was 2014. I was incredibly moved and inspired by this blog entry:

Someone had finally done it. (zupo in this case, kudos!) Someone had finally beaten me to implementing the dream of git push heroku plone. And I could not have been happier.

But something nagging would not let go: I still didn't fully understand how the buildpack worked. Today I'm happy to say: that nag is gone and I now fully understand how Heroku buildpacks work… thanks to… wait for it… a Buildpack for Plock.

Plock Buildpack

There are a lot of the same things going on in both the Plone Buildpack and the Plock Buildpack, with some exceptions.

Experimental

The Plock buildpack is highly experimental, still in development and possibly innovative. Here's what it currently does:

  • Configures Python user site directory in Heroku cache
  • Installs setuptools in user site
  • Installs pip in user site
  • Installs Buildout in user site
  • Installs Plone in cache
  • Copies cache to build directory
  • Installs a portion of "user Plone" (the Heroku app's buildout.cfg) in the build directory (not the cache)
  • Relies on the app to install the remainder (the Heroku app's heroku.cfg). Most importantly, the app runs Buildout, which finishes quickly thanks to the cache, and configures the port, which is only available to the app (not the buildpack).

Here's an example:

# buildout.cfg
[buildout]
extends = https://raw.github.com/plock/pins/master/plone-4-3
[user]
packages = collective.loremipsum
# heroku.cfg
[buildout]
extends = buildout.cfg
[plone]
http-address = ${env:PORT}
# Procfile
web: buildout -c heroku.cfg; plone console

Opinionated

The Plock Buildpack is built on Plock, an "opinionated" installer for Plone. It may eventually use Plock itself, but currently only uses Plock Pins.

June 26, 2015

Blue Dynamics: Boosting Travis CI: Python and Buildout caching

by Jens W. Klein at 2015-06-26T07:35:00Z

zc.buildout is Python's Swiss army knife for building complex environments. MacGyver would love it. Travis CI together with GitHub is a wonderful service for open source projects to do collaborative development hand in hand with Test Driven Development and Continuous Integration.

But complex Python builds take their time - mainly because of the long list of dependencies and the bunch of downloads. It is boring to wait 15 minutes for a test run.

So it was for collective.solr, a package that integrates the excellent Solr open source search platform (written in Java, from the Apache Lucene project) with the Plone Enterprise CMS. In addition to the complex Plone build, it downloads and configures a complete Solr for the test environment.

For a while now, Travis CI has offered caching on its container based infrastructure.

Using it with buildout is easy once one knows how.

  1. The file .travis.yml configures Travis CI; open it.
  2. Set language: python if you haven't already.
  3. An important setting is sudo: false, which switches explicitly to the container based infrastructure. This is the default for projects created at Travis CI before 2015-01-01, but explicit is better than implicit!
  4. Next the caching is defined. We also enable pip caching. This looks like so:
    cache:
      pip: true
      directories:
        - $HOME/buildout-cache
  5. In order to create our caching directories, a before_install step is needed. In this step we install buildout too. Note: there is no need to use the old and busted bootstrap.py any longer (except for old and busted Plone builds; since 4.3 at least it will work).
    before_install:
      - mkdir -p $HOME/buildout-cache/{eggs,downloads}
      - virtualenv .
      - bin/pip install --upgrade pip setuptools zc.buildout
  6. Next we need to tweak the file travis.cfg. This is the buildout configuration file used for Travis. Under the [buildout] section add the lines:

    eggs-directory = /home/travis/buildout-cache/eggs
    download-cache = /home/travis/buildout-cache/downloads
    Note: the $HOME environment variable is not available as a buildout variable, so we need to hard-code this to /home/travis - Travis CI cannot guarantee that this will stick for all future. So if there is a way to access environment variables before buildout runs the parts, please let me know.

The second time Travis CI builds the project it took about 3 minutes instead of 15!
The full files as we use them for collective.solr:

.travis.yml

language: python
# with next we get on container based infrastructure, this enables caching
sudo: false
python:
  - 2.7
cache:
  pip: true
  directories:
    - $HOME/buildout-cache
env:
  - PLONE_VERSION=4.3.x SOLR_VERSION=4.10.x
  - PLONE_VERSION=5.0.x SOLR_VERSION=4.10.x
before_install:
  - mkdir -p $HOME/buildout-cache/{eggs,downloads}
  - virtualenv .
  - bin/pip install --upgrade pip setuptools zc.buildout
install:
  - sed -ie "s#plone-x.x.x.cfg#plone-$PLONE_VERSION.cfg#" travis.cfg
  - sed -ie "s#solr-x.x.x.cfg#solr-$SOLR_VERSION.cfg#" travis.cfg
  - bin/buildout -N -t 20 -c travis.cfg
script:
  - bin/code-analysis
  - bin/test
after_success:
  - pip install -q coveralls
  - coveralls

travis.cfg

[buildout]
extends =
    base.cfg
    plone-x.x.x.cfg
    solr.cfg
    solr-x.x.x.cfg
    versions.cfg
parts +=
    code-analysis
# caches, see also .travis.yml
# one should not depend on '/home/travis' but it seems stable in containers.
eggs-directory = /home/travis/buildout-cache/eggs
download-cache = /home/travis/buildout-cache/downloads
[code-analysis]
recipe = plone.recipe.codeanalysis
pre-commit-hook = False
# return-status-codes = True

We may enhance this, so you can always look at its current state at github/collective/collective.solr.

image by "Albert Einstein - Start 1" by DLR German Aerospace Center at Flickr

June 25, 2015

Andreas Jung: The Docker way on dealing with "security"

2015-06-25T04:04:10Z

RANT: The Docker developers are so serious about security

Davide Moro: Kotti CMS - how to turn your Kotti CMS into an intranet

by davide moro at 2015-06-23T15:44:57Z

In the previous posts we have seen that Kotti is a minimal but robust high-level Pythonic web application framework based on Pyramid that includes an extensible CMS solution, both user and developer friendly. By developer friendly I mean that you can be productive in one or two days without any knowledge of Kotti or Pyramid if you already know the Python programming language.

If you have to work with relational databases, hierarchical data, workflows or complex security requirements, Kotti is your friend. It uses well known Python libraries.

In this post we'll try to turn our Kotti CMS public site into a private intranet/extranet service.

I know, there are other solutions keen on building intranet or collaboration portals, like Plone (I've been working for 8 years on large and complex intranets: big public administration customers with thousands of active users, several editor teams, multiple migrations, etc.) or the KARL project. But let's pretend that in our use case we have simpler requirements and we don't need overly complex solutions or features like communities, email subscriptions or similar things.

Thanks to Pyramid's and Kotti's architectural design, you can turn your public website into an intranet without having to fork the Kotti code: no forks!

How to turn your site into an intranet

This could be a hard task with other CMS solutions, but with Kotti (or the heavier Plone) it requires just 4 steps:
  1. define a custom intranet workflow
  2. apply your custom workflow to images and files (by default they are not associated with any workflow, so once added they are immediately public)
  3. set a default fallback permission for all views
  4. override the default root ACL (populators)

1 - define a custom intranet workflow

Intranet workflows maybe different depending on your organization requirements. It might be very simple or with multiple review steps.

The important thing is: no more granting the view permission to anonymous users, unless you are willing to define an externally published state.

With Kotti you can design your workflow just editing an xml file. For further information you can follow the Kotti CMS - workflow reference article.

2 - apply your custom workflow to images and files

By default they are not associated with any workflow, so once added they are immediately public.

This step will require just two additional lines of code in your includeme or kotti_configure function.

Already described here: Kotti CMS - workflow reference, see the "How to enable the custom workflow for images and files" section.

3 - set a default fallback permission

In your includeme function you just need to tell the configurator to set a default permission even for public views already registered.

I mean that if somewhere in the Kotti code there is any callable view not associated with a permission, it won't be accessible to anonymous users after this step.

In your includeme function you'll need to:
def includeme(config):
    ...
    # set a default permission even for public views already registered
    # without permission
    config.set_default_permission('view') 
If you want to bypass the default permission for certain views, you can decorate them with a special permission (NO_PERMISSION_REQUIRED from pyramid.security) which indicates that the view should always be executable by entirely anonymous users, regardless of the default permission. See:
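
For example, here is a minimal sketch of bypassing the default permission for one view (the ping view and its registration are hypothetical, just to illustrate the use of NO_PERMISSION_REQUIRED):

from pyramid.security import NO_PERMISSION_REQUIRED

# hypothetical health-check view that stays public even though
# config.set_default_permission('view') is in effect
def ping(request):
    return 'pong'

def includeme(config):
    ...
    config.set_default_permission('view')
    config.add_view(ping, name='ping', renderer='string',
                    permission=NO_PERMISSION_REQUIRED)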

4 - override the default root ACL (populators)

The default Kotti ACL associated with the root of the site (from kotti.security import SITE_ACL) gives view privileges to every user, including anonymous.
You can override this configuration to require users to log in before they can view any of your site's pages. To achieve this, you'll have to set your site's ACL as shown on the following url:
You'll need to add or override the default populator. See the kotti.populators options here:
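
As a rough illustration only (not the referenced documentation), a custom populator replacing the root ACL might look along these lines; the helper imports, role names and ACL entries are assumptions, so check them against Kotti's security documentation:

from kotti.populate import populate as populate_kotti
from kotti.resources import get_root
from pyramid.security import Allow, ALL_PERMISSIONS

# assumed intranet ACL: note there is no (Allow, Everyone, 'view') entry,
# so anonymous users can no longer view anything
INTRANET_ACL = [
    (Allow, 'role:admin', ALL_PERMISSIONS),
    (Allow, 'role:editor', ('view', 'add', 'edit')),
    (Allow, 'role:viewer', ('view',)),
]

def populate():
    populate_kotti()              # run Kotti's default populator first
    root = get_root()
    root.__acl__ = INTRANET_ACL   # replace the default SITE_ACL on the site root

You would then point the kotti.populators setting of your ini file at this populate function instead of the default one.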

Results

After reading this article you should be able to close your Kotti site to anonymous users, obtaining a simple, private intranet-like area.

Off-topic: you can also use Kotti as a backend-only content administration area for public websites, with a completely decoupled frontend solution.

UPDATE 20150623: now you can achieve the same goals described in this article installing kotti_backend. See https://github.com/Kotti/kotti_backend

Useful links

All posts about Kotti

All Kotti posts published by @davidemoro:


    Blue Dynamics: Speedup "RelStorage with memcached" done right

    by Jens W. Klein at 2015-06-23T12:55:00Z

    RelStorage is a great option as a backend for ZODB. RelStorage uses a shared memcached as a second level cache for all instances storing to the same database.

    In comparison, a classical ZEO client with its ZEO server backend uses one filesystem cache per running instance (shared by all connection pools of this instance). In both cases (ZEO / RelStorage) pickled objects are stored in the cache. ZEO writes the pickles to the filesystem, which takes its time unless you're using a RAM disk. So when reading back it's probably in RAM (OS-level disk caches), but you cannot know. Having enough free RAM helps here, but prediction is difficult. Also, the one-cache-per-instance limitation when running 2 or more instances for some larger site makes this way of caching less efficient.

    Additionally, sysadmins usually hate the ZEO server (because it's exotic) and love PostgreSQL (well documented 'standard' tech they know how to work with) - a good reason to use PostgreSQL. On the ZEO-client side there are advantages too: while the first level connection cache is the same as for a usual ZEO client, the second level cache is shared between all clients.

    [apt-get|yum] install memcached - done. Really?

    No! We need to choose between pylibmc and python-memcached. But which one is better?

    Also memcached is running on the same machine as the instances! So we can use unix sockets instead of TCP/IP. But what is better? 

    Benchmarking FTW!

    I did some benchmarks, assuming we have random data to write and read with different keys, and also wanting to check whether the overhead of accessing non-existent keys has an effect. I quickly put together a little script giving me numbers. After configuring two similar memcached instances, each with 1024MB, one with tcp and the other with a socket, I ran this script and got the following result:

    Benchmark pylibmc vs. python-memcached

    In short:

    • memcached with sockets is ~30% faster than memcached with tcp
    • pylibmc is significantly faster with tcp
    • python-memcached is slightly faster with sockets

    Now, this was valid on my machine. How does it behave in different environments? If you want to try/tweak the script and have similar or different results, please let me know!
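
A rough sketch of such a benchmark (not the author's actual script) could look like this; it assumes python-memcached is installed and that memcached listens both on 127.0.0.1:11211 and on the unix socket /tmp/memcached.sock:

import os
import time

import memcache

def benchmark(client, rounds=5000):
    data = os.urandom(1024)
    start = time.time()
    for i in range(rounds):
        key = 'bench-%d' % i
        client.set(key, data)            # write random data
        client.get(key)                  # read it back
        client.get('missing-%d' % i)     # overhead of hitting a non-existent key
    return time.time() - start

tcp_client = memcache.Client(['127.0.0.1:11211'])
socket_client = memcache.Client(['unix:/tmp/memcached.sock'])
print('tcp:    %.2f s' % benchmark(tcp_client))
print('socket: %.2f s' % benchmark(socket_client))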

    Overall, RelStorage will be faster if configured with sockets. If this is not possible, choosing the right library will speed things up at least a bit.

    Picture at top by Gregg Sloan at Flickr

     

    June 22, 2015

    Paul Roeland: Tokyo symposium, a very Plone event

    2015-06-22T10:46:43Z

    Last week, I had the pleasure to participate in the first Plone symposium held in Asia. 

    It started already on Friday, when we (Eric Steele, Alexander Loechel, Sven Strack and me) were invited into the CMScom offices by the organizers, Takeshi Yamamoto and Manabu Terada. 

    There, we met other people, like Masami Terada (whom I first met at the Bristol conference) and were treated to some great cakes. All the while having an inspired conversation on the Japanese Plone community, Pycon JP and Pycon APAC.

    Later, at a rather nice restaurant, we were joined by more people, including Tres Seaver and several of the other speakers and staff of the Symposium.

    The following morning we headed for Ochanomizu University, who had not only supplied the meeting space, but thoughtfully also some cute turtles and a sleepy red cat to enhance the breaks.

    The Symposium itself was astounding, especially when you consider it was the first time it was held. With 67 people participating from as far away as Osaka and beyond and a wide range of talks, both in Japanese and English, there was something to be found for everyone.

    Personal highlights:

    • meeting the energetic Chiaki Ishida. Apart from effortlessly chairing the high-quality panel on Plone in higher education, she has been instrumental in using and promoting Plone as Learning Management System at her university and beyond. She also works with professor Toru Miura, whose talk on using Plone for improving his lecturing programme impressed with a nice mix of experience and evidence.
    • Max Nakane, a developer who is dependent on assistive technology, gave a great speech on why accessibility matters and where the accessibility of Plone and other systems stand, and what should be improved. Not only that, the next day I had the chance to work directly with him on the Sprint, and identify issues still open for Plone 5.
    • Tres gave what he described as “the least technical talk I ever held”. Yet still, after it, I finally understand where bobo_traverse comes from… I shudder to think what happens if I see more of his technical talks ;-)
      With some good storytelling, it emphasized the community values of our tribes.

    That was also the feeling that ran through the day. Not only in lovely details like hand-made Plone cookies but mostly in the talks and in the hallway conversations, this is a community aimed at helping each other. Nice touch also to include talks on other Python frameworks and technologies.

    After Lightning talks (always fun, even in a foreign language!) most of us headed for the afterparty at a nearby Izakaya. Where that curious phenomenon happened again: people trying to tell you that their English is not so good, and then conversing with you in super good English…

    It was fun to meet such a diverse group of people, and see that the “put a bunch of Plone people in a room and good conversations happen” principle is universally applicable.

    Next day was Sprinting day. Despite knowing that sprinting is not yet as popular as it should be within the Japanese community, a pretty nice number turned up, and we all set out on our various tasks.

    As said before, I mostly worked with Max Nagane on accessibility issues. The good news: Plone 5 will be a massive step in the right direction. But obviously, there is still room for improvement. If you want to help, join the Anniversary sprint in Arnhem/NL when you can, or just start working on the relatively small list in the metaticket.

    The next day unfortunately it was time already to fly back, feeling a bit sad to leave but very happy to have met the vibrant, kind and knowledgeable Plone Japan community. Of whom I hope to see many again in the future, both inside and outside Japan.

    And who knows, apparently “Tokyo 2020″ is a thing now ;-)