Planet Plone

This is where developers and integrators write about Plone, and is your best source for news and developments from the community.

April 23, 2014

Connexions Developers Blog: OpenStax CNX Featured On Floss Weekly Podcast

by Ed Woodward at 2014-04-23T21:10:08Z


Two OpenStax team members were featured guests on the Floss Weekly podcast this morning.  Floss Weekly covers open source software for the TWIT podcast network.  Kathi Fletcher and Ross Reedstrom told the history of the project along with a discussion of the work we are currently doing on OpenStax CNX. The podcast is available to view or download from TWIT.

Quintagroup: Collective.easyform

by zoriana at 2014-04-23T14:51:52Z

Plone developers constantly search for more efficient ways of working with Plone. Dexterity is the new platform for content types in Plone and will be used instead of Archetypes in Plone 5. As a result, there is a need to create custom forms using Dexterity.
Quintagroup offers a new Plone product, collective.easyform, that generates web forms which save or mail form input. Easyform provides a through-the-web Plone form builder using fields, widgets, actions and validators.

How to use:

  • Select Easyform from the Add new drop-down menu. Choose form title, description and other settings.
  • Add fields or fieldsets to create a unique form that meets your particular requirements. There are enough basic field types to satisfy most demands: File Upload, Text line (String), Integer, Yes/No, Date, Date/Time, Floating-point number, Choice, Rich Text, Image, Multiple Choice, Text, Password, ReCaptcha field.
  • Continue to customize the form by setting the order of fields, defining required and hidden ones, choosing validators if necessary, and adjusting other field-type-specific settings.

Check out our video tutorial on collective.easyform

Try it yourself!

Collective.easyform is compatible with Plone 4.3.2. It is distributed as a Python egg and can easily be installed into your buildout like other Plone packages. Visit the product pages to find more information about collective.easyform.
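
For example, a minimal buildout stanza might look like this – a sketch, assuming a standard plone.recipe.zope2instance part named "instance"; re-run buildout and activate the add-on in the Add-ons control panel afterwards:

[instance]
recipe = plone.recipe.zope2instance
eggs =
    collective.easyform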

Josh Johnson: Centralized Ansible Management With Knocd + Auto-provisioning with AWS

by jjmojojjmojo at 2014-04-23T11:47:16Z

Ansible is a great tool. We’ve been using it at my job with a fair amount of success. When it was chosen, we didn’t have a requirement for supporting Auto Scaling groups in AWS. This poses a unique problem – we need machines to be able to essentially provision themselves when AWS brings them up. It has interesting implications outside of AWS as well. This article covers using the Ansible API to build just enough of a custom playbook runner to target a single machine at a time, discusses how to wire it up to knockd, a “port knocking” server and client, and finally shows how to use user data in AWS to execute this at boot – or any reboot.

Ansible – A “Push” Model

Ansible is a configuration management tool used in orchestration of large pieces of infrastructure. It’s structured as a simple layer above SSH – but it’s a very sophisticated piece of software. Bottom line, it uses SSH to “push” configuration out to remote servers – this differs from some other popular approaches (like Chef, Puppet and CFEngine) where an agent is run on each machine, and a centralized server manages communication with the agents. Check out How Ansible Works for a bit more detail.

Every approach has its advantages and disadvantages – discussing the nuances is beyond the scope of this article, but the primary disadvantage of Ansible is also one of its strongest advantages: it’s decentralized and doesn’t require agent installation. The problem arises when you don’t know your inventory (Ansible-speak for “list of all your machines”) beforehand. This can be mitigated with inventory plugins. However, when you have to configure machines that are being spun up dynamically and need to be configured quickly, the push model starts to break down.

Luckily, Ansible is highly compatible with automation, and provides a very useful python API for specialized cases.

Port Knocking For Fun And Profit

Port knocking is a novel way of invoking code. It involves listening to the network at a very low level, watching for attempted connections to a specific sequence of ports. No ports are opened. It has its roots in network security, where it’s used to temporarily open up firewalls. You knock, then you connect, then you knock again to close the door behind you. It’s very cool tech.

The standard implementation of port knocking is knockd, included with most major Linux distributions. It’s extremely lightweight, and uses a simple configuration file. It supports some interesting features, such as limiting the number of times a client can invoke the knock sequence, by commenting out lines in a flat file.

User Data In EC2

EC2 has a really cool feature called user data, which allows you to add some information to an instance upon boot. It works with cloud-init (installed on most AMIs) to perform tasks and run scripts when the machine is first booted, or rebooted.

Auto Scaling

EC2 provides a mechanism for spinning up instances based on need (or really any arbitrary event). The AWS documentation gives a detailed overview of how it works. It’s useful for responding to sudden spikes in demand, or contracting your running instances during low-demand periods.

Ansible + Knockd = Centralized, On-Demand Configuration

As mentioned earlier, Ansible provides a fairly robust API for use in your own scripts. Knockd can be used to invoke any shell command. Here’s how I tied the two together.

Prerequisites

All of my experimentation was done in EC2, using the Ubuntu 12.04 LTS AMI.

To get the machine running ansible configured, I ran the following commands:

$ sudo apt-get update
$ sudo apt-get install python-dev python-pip knockd
$ sudo pip install ansible

Note: it’s important that you install the python-dev package before you install ansible. This will provide the proper headers so that the C-based SSH library will be compiled, which is faster than the pure-Python version installed when the headers are not available.

You’ll notice some information from the knockd package regarding how to enable it. Take note of this for final deployment, but we’ll be running knockd manually during this proof-of-concept exercise.
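
On Ubuntu, enabling it for final deployment typically means something like the following – an assumption about the stock packaging of knockd:

$ sudo sed -i 's/START_KNOCKD=0/START_KNOCKD=1/' /etc/default/knockd
$ sudo service knockd start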

On the “client” machine, the one who is asking to be configured, you need only install knockd. Again, the service isn’t enabled by default, but the package provides the knock command.

EC2 Setup

We require a few things to be done in the EC2 console for this all to work.

First, I created a keypair for use by the tool. I called it “bootstrap”. I downloaded it onto a freshly set up instance I designated for this purpose.

NOTE: It’s important to set the permissions of the private key correctly. They must be set to 0600.
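
On that instance, that means something like this, assuming the key was saved as bootstrap.pem in the current directory:

$ chmod 0600 bootstrap.pem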

I then needed to create a special security group. The point of the group is to allow all ports from within the current subnet. This gives us maximum flexibility when assigning port knock sequences.

Depending on our circumstances, we might also need to open up UDP traffic (port knocks can be TCP or UDP based, or a combination within a sequence).

For the sake of security, a limited range of a specific type of connection is advised, but since we’re only communicating over our internal subnet, the risk here is minimal.

Note that I’ve also opened SSH traffic to the world. This is not advisable as standard practice, but it’s necessary for me since I do not have a fixed IP address on my connection.

Making It Work

I wrote a simple python script that runs a given playbook against a given IP address:

"""
Script to run a given playbook against a specific host
"""
import ansible.playbook
from ansible import callbacks
from ansible import utils
import argparse
import os, sys
parser = argparse.ArgumentParser(
    description="Run an ansible playbook against a specific host."
)
parser.add_argument(
    'host',
    help="The IP address or hostname of the machine to run the playbook against."
)
parser.add_argument(
    "-p",
    "--playbook",
    default="default.yml",
    metavar="PLAY_BOOK",
    help="Specify path to a specific playbook to run."
)
parser.add_argument(
    "-c",
    "--config_file",
    metavar="CONFIG_FILE",
    default="./config.ini",
    help="Specify path to a config file. Defaults to %(default)s."
)
def run_playbook(host, playbook, user, key_file):
    """
    Run a given playbook against a specific host, with the given username
    and private key file.
    """
    stats = callbacks.AggregateStats()
    playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
    runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)
    pb = ansible.playbook.PlayBook(
        host_list=[host,],
        playbook=playbook,
        forks=1,
        remote_user=user,
        private_key_file=key_file,
        runner_callbacks=runner_cb,
        callbacks=playbook_cb,
        stats=stats
    )
    pb.run()
options = parser.parse_args()
playbook = os.path.abspath("./playbooks/%s" % options.playbook)
run_playbook(options.host, playbook, 'ubuntu', "./bootstrap.pem")

Most of the script is user-interface code, using argparse to bring in configuration options. One unimplemented feature is using an INI file to specify things like the default playbook, pem key, user, etc. These things are just hard coded in the call to run_playbook for this proof-of-concept implementation.
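
A sketch of how that config file could be wired in, using the Python 2 standard library's ConfigParser – the section and key names here are hypothetical:

import ConfigParser  # Python 2 stdlib

def load_config(path):
    """Read run defaults from an INI file, falling back to hard-coded values."""
    parser = ConfigParser.SafeConfigParser()
    parser.read(path)  # a missing file is silently ignored
    cfg = dict(parser.items('ansible')) if parser.has_section('ansible') else {}
    return {
        'playbook': cfg.get('playbook', 'default.yml'),
        'user': cfg.get('user', 'ubuntu'),
        'key_file': cfg.get('key_file', './bootstrap.pem'),
    }

The hard-coded call at the bottom of the script could then pull its user and key file values from load_config(options.config_file).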

The real heart of the script is the run_playbook function. Given a host (IP or hostname), a path to a playbook file (assumed to be relative to a “playbooks” directory), a user and a private key, it uses the Ansible API to run the playbook.

This function represents the bare-minimum code required to apply a playbook to one or more hosts. It’s surprisingly simple – and I’ve only scratched the surface here of what can be done. With custom callbacks, instead of the ones used by the ansible-playbook runner, we can fine tune how we collect information about each run.
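
As a sketch of that idea, here is a runner callback subclass that records per-host results while delegating display to the defaults (method names follow the Ansible 1.x callbacks API used above):

from ansible import callbacks

class RecordingRunnerCallbacks(callbacks.PlaybookRunnerCallbacks):
    """Collect (host, status, result) tuples for later reporting."""

    def __init__(self, stats, verbose=0):
        callbacks.PlaybookRunnerCallbacks.__init__(self, stats, verbose=verbose)
        self.results = []

    def on_ok(self, host, host_result):
        self.results.append((host, 'ok', host_result))
        callbacks.PlaybookRunnerCallbacks.on_ok(self, host, host_result)

    def on_failed(self, host, results, ignore_errors=False):
        self.results.append((host, 'failed', results))
        callbacks.PlaybookRunnerCallbacks.on_failed(self, host, results, ignore_errors)

Passing an instance of this as runner_callbacks to PlayBook would let the wrapper script report or notify on each result.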

The playbook I used for testing this implementation is very simplistic (see the Ansible playbook documentation for an explanation of the playbook syntax):

---
- hosts: all
  sudo: yes
  tasks:
  - name: ensure apache is at the latest version
    apt: update_cache=yes pkg=apache2 state=latest
  - name: drop an arbitrary file just so we know something happened
    copy: src=it_ran.txt dest=/tmp/ mode=0777

It just updates the apt cache, installs apache, and drops a file into /tmp to give me a clue that it ran.

Note that the hosts: setting is set to “all” – this means that this playbook will run regardless of the role or class of the machine. This is essential, since, again, the machines are unknown when they invoke this script.

For the sake of simplicity, and to set a necessary environment variable, I wrapped the call to my script in a shell script:

#!/bin/bash
export ANSIBLE_HOST_KEY_CHECKING=False
cd /home/ubuntu
/usr/bin/python /home/ubuntu/run_playbook.py $1 >> $1.log 2>&1

The $ANSIBLE_HOST_KEY_CHECKING environment variable here is necessary, short of futzing with the ssh configuration for the ubuntu user, to tell Ansible to not bother verifying host keys. This is required in this situation because the machines it talks to are unknown to it, since the script will be used to configure newly launched machines. We’re also running the playbook unattended, so there’s no one to say “yes” to accepting a new key.

The script also does some very rudimentary logging of all output from the playbook run – it creates logs for each host that it services, for easy debugging.

Finally, the following configuration in knockd.conf makes it all work:

[options]
        UseSyslog
[ansible]
        sequence    = 9000, 9999
        seq_timeout = 5
        Command     = /home/ubuntu/run.sh %IP%

The first configuration section, [options], is special to knockd – it’s used to configure the server itself. Here we’re just asking knockd to log messages to the system log (e.g. /var/log/messages).

The [ansible] section sets up the knock sequence for a machine that wants Ansible to configure it. The sequence set here (it can be anything – any port numbers, and any number of ports >= 2) is 9000, 9999. There’s a 5 second timeout – in the event that the client doing the knocking takes longer than 5 seconds to complete the sequence, nothing happens.

Finally, the command to run is specified. The special %IP% variable is replaced when the command is executed by the IP address of the machine that knocked.

At this point, we can test the setup by running knockd. We can use the -vD options to output lots of useful information.

We just need to then do the knocking from a machine that’s been provisioned with the bootstrap keypair.

Here’s what it looks like (these are all Ubuntu 12.04 LTS instances):

On the “server” machine, the one with the ansible script:

$  sudo knockd -vD
config: new section: 'options'
config: usesyslog
config: new section: 'ansible'
config: ansible: sequence: 9000:tcp,9999:tcp
config: ansible: seq_timeout: 5
config: ansible: start_command: /home/ubuntu/run.sh %IP%
ethernet interface detected
Local IP: 172.31.31.48
listening on eth0...

On the “client” machine, the one that wants to be provisioned:

$ knock 172.31.31.48 9000 9999

Back on the server machine, we’ll see some output upon successful knock:

2014-03-23 10:32:02: tcp: 172.31.24.211:44362 -> 172.31.31.48:9000 74 bytes
172.31.24.211: ansible: Stage 1
2014-03-23 10:32:02: tcp: 172.31.24.211:55882 -> 172.31.31.48:9999 74 bytes
172.31.24.211: ansible: Stage 2
172.31.24.211: ansible: OPEN SESAME
ansible: running command: /home/ubuntu/run.sh 172.31.24.211

Making It Automatic With User Data

Now that we have a way to configure machines on demand – the knock could happen at any time, from a cron job, executed via a distributed SSH client (like fabric), etc – we can use the user data feature of EC2 with cloud-init to do the knock at boot, and every reboot.
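
As a sketch of the cron variant, a client-side crontab entry could re-knock nightly:

0 3 * * * /usr/bin/knock 172.31.31.48 9000 9999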

Here is the user data that I used, which is technically cloud config code (more examples here):

#cloud-config
packages:
 - knockd
runcmd:
 - knock 172.31.31.48 9000 9999

User data can be edited at any time as long as an EC2 instance is in the “stopped” state. When launching a new instance, the field is hidden in Step 3, under “Advanced Details”:

Once this is established, you can use the “launch more like this” feature of the AWS console to replicate the user data.

This is also a prime use case for writing your own provisioning scripts (using something like boto) or using something a bit higher level, like CloudFormation.
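
For instance, a minimal boto (2.x) sketch for launching an instance with this user data – the region, AMI ID, and security group name are placeholders:

import boto.ec2

USER_DATA = """#cloud-config
packages:
 - knockd
runcmd:
 - knock 172.31.31.48 9000 9999
"""

conn = boto.ec2.connect_to_region('us-east-1')  # placeholder region
conn.run_instances(
    'ami-00000000',                    # placeholder AMI ID
    key_name='bootstrap',
    security_groups=['knock-subnet'],  # placeholder group name
    instance_type='t1.micro',
    user_data=USER_DATA,
)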

Auto Scaling And User Data

Auto Scaling is controlled via “auto scaling groups” and “launch configurations”. If you’re not familiar with them, these can sound like foreign concepts, but they’re quite simple.

Auto Scaling Groups define how many instances will be maintained, and set up the events to scale up or down the number of instances in the group.

Launch Configurations are nearly identical to the basic settings used when launching an EC2 instance, including user data. In fact, user data is entered on Step 3 of the process, in the “Advanced Details” section, just like when spinning up a new EC2 instance.

In this way, we can automatically configure machines that come up via auto scaling.

Conclusions And Next Steps

This proof of concept presents an exciting opportunity for people who use Ansible and have use cases that benefit from a “pull” model – without really changing anything about their setup.

Here are a few miscellaneous notes, and some things to consider:

  • There are many implementations of port knocking, beyond knockd. There is a huge amount of information available to dig into the concept itself, and it’s various implementations.
  • The way the script is implemented, it’s possible to have different knock sequences execute different playbooks. A “poor-man’s” method of differentiating hosts.
  • The Ansible script could be coupled with the AWS API to get more information about the particular host it’s servicing. Imagine using a tag to set the “class” or “role” of the machine. The API could be used to look up that information about the host, and apply playbooks accordingly (see the sketch after this list). This could also be done with variables – the values that are “punched in” when a playbook is run. This means one source of truth for configuration – just add the relevant bits to the right tags, and it just works.
  • I tested this approach with an auto scaling group, but I’ve used a trivial playbook and only launched 10 machines at a time – it would be a good idea to test this approach with hundreds of machines and more complex plays – my “free tier” t1.micro instance handled this “stampeding herd” without a blink, but it’s unclear how this really scales. If anyone gives this a try, please let me know how it went.
  • Custom callbacks could be used to enhance the script to send notifications when machines were launched, as well as more detailed logging.
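
As a sketch of the tag-lookup idea mentioned above – using boto 2.x, with the “role” tag name and the playbook naming convention as assumptions:

import boto.ec2

def playbook_for_ip(ip, region='us-east-1'):
    """Map a knocking machine's private IP to a playbook via its 'role' tag."""
    conn = boto.ec2.connect_to_region(region)
    for reservation in conn.get_all_instances(filters={'private-ip-address': ip}):
        for instance in reservation.instances:
            return '%s.yml' % instance.tags.get('role', 'default')
    return 'default.yml'

The wrapper script could call this before run_playbook instead of always using default.yml.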

April 22, 2014

Plone.org: Plone Docs Get Monumental Overhaul

2014-04-22T20:10:00Z

A summary of DocSprint Munich.

April 21, 2014

JC Brand: Plone 4 compatible release of Quills

2014-04-21T09:41:00Z

/img/quills.png

Since I started this site, I've been using Quills for blogging functionality in Plone. So far, it's been perfectly fine for my relatively simple needs and I haven't found a reason to change. Unfortunately, except for some work to port Quills to Plone 4, development and bugfixes on Quills have completely dried up. No new releases were made for more than a year and the mailing list was almost dead. I only realised this when I fixed some bugs while porting my site to Plone 4 and couldn't find anybody to help make a new Plone 4 compatible release.

Eventually I got hold of Tim Hicks, one of the original maintainers of Quills, and I offered to help out with maintaining Quills and to make the new releases.

So, I'm happy to announce that new Plone 4 compatible releases of the Quills packages have been made. Notably Products.Quills, Products.QuillsEnabled and quills.app (all now at version 1.8a1). This new release is labeled alpha, as I'm still very new to the Quills codebase, but I'm already using it on this blog without any major problems.

If there is anyone who is still using Quills on a legacy Plone site and would like to migrate to Plone 4, just pin your Products.Quills (and Products.QuillsEnabled) versions to 1.8a1 in your versions.cfg or buildout.cfg, as in the pin below.
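
For example, a sketch of the [versions] section using the releases named above:

[versions]
Products.Quills = 1.8a1
Products.QuillsEnabled = 1.8a1
quills.app = 1.8a1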

Hopefully everything goes well (did I mention it's alpha ;) but if not, feel free to let me know and I'll try to help out.

April 18, 2014

Quintagroup: collective.contact.core

by zoriana at 2014-04-18T13:07:57Z

collective.contact.core is a Plone add-on that helps to manage organizations and staff in Plone (the main developers are Vincent Fretin and Cedric Messiant). This product provides a directory that can contain contact information for different content types: organizations/sub-organizations, persons, and positions. The available contact details depend on which content types you assign the IContactDetails behavior to, so it can cover many different uses.
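
As a sketch, assigning the behavior to a Dexterity type is done in its GenericSetup FTI; the behavior's dotted path below is an assumption, so check the package's interfaces before copying it:

<?xml version="1.0"?>
<object name="my_type" meta_type="Dexterity FTI">
  <property name="behaviors" purge="False">
    <element value="collective.contact.core.behaviors.IContactDetails" />
  </property>
</object>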

Easy to use:

  1. Add a directory to your website and insert all the additional information required. You’ll need to specify the types of positions and organizations that will be used (e.g. Faculty/Staff/Students for universities). Don’t worry about filling out the form, since it can be edited at any time later.

  2. Create organization(s) in the directory. Depending on the hierarchy, add other organizations (they may correspond to units, divisions, departments, etc.). An organization can contain positions (e.g. Dean, secretary, SEO) that will be connected with persons (physical people). Choose Organization/Position from the Add new drop-down menu or click on Create contact to branch out your directory.

A person content type can hold one or more positions or be a member of one or more organizations. All contact types have optional fields with a variety of contact information, including phone, cell phone, fax, email, address, zip code, etc. Such data management is very suitable for universities.

collective.contact.core can be useful for all kinds of organizations, regardless of their size, number of employees or subdivisions. The created directory is easy to manipulate and can be branched or edited at any time.

collective.contact.core adds new content types, but preserves Plone functionality, especially concerning users’ rights. Every ‘organization’ content type is similar to a folder, so you can specify in the Sharing tab what rights users have. Moreover, the default Plone search is very efficient when you want to find a specific person or position across the whole website.

Use collective.contact.core to arrange your organization and contact information.

Contributors:

  • Gauthier Bastien, IMIO
  • Vincent Fretin, Ecreall
  • Stéphan Geulette, IMIO
  • Cédric Messiant, Ecreall
  • Frédéric Peters, Entr'ouvert
  • Thomas Desvenain, Ecreall

April 17, 2014

Agendaless: Pyramid for Plone Developers: Training at Plone Symposium MW 2014

by Agendaless at 2014-04-17T19:44:03Z

We are pleased to be offering a two-day training session at the 2014 Plone Symposium Midwest this year. The course will cover Pyramid development topics, aimed at Plone developers.

For details, please see the training page.

Martijn Faassen: Morepath Python 3 support

2014-04-17T12:32:59Z

Thanks to an awesome contribution by Alec Munro, Morepath, your friendly neighborhood Python micro framework with super powers, has just gained Python 3 support!

Developing something new while juggling the complexities of Python 2 and Python 3 in my head at the same time was not something I wanted to do -- I wanted to focus on my actual goals, which was to create a great web framework.

So then I had to pick one version of Python or the other. Since my direct customer use cases involves integrating it with Python 2 code, picking Python 2 was the obvious choice.

But now that Morepath has taken shape, taking on the extra complexity of supporting Python 3 is doable. The Morepath test coverage is quite comprehensive, and I had already configured tox (so I could test it with PyPy). Adding Python 3.4 meant patiently going through all the code and adjusting it, which is what Alec did. Thank you Alec, this is great!
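
For the curious, the tox configuration for that kind of matrix is tiny – a sketch with assumed env names and test command:

[tox]
envlist = py27, py34, pypy

[testenv]
deps = pytest
commands = py.test {posargs}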

Morepath's dependencies (such as WebOb) already had Python 3 support, so credit goes to their maintainers too (thanks Chris McDonough in particular!). This includes the Reg library, which I polyglotted to support Python 3 myself a few months ago.

All this doesn't take away from my opinion that we need to do more to support the large Python 2 application codebases. They are much harder to transition to Python 3 than well-tested libraries and frameworks, for which the path was cleared in the last 5 years or so.

[update: this is still in git; the Morepath 0.1 release is Python 2 only. But it will be included in the upcoming Morepath 0.2 release]

David "Pigeonflight" Bain: Install Plone in under 5 minutes on Codio.com

by David Bain at 2014-04-17T03:21:42Z

I was introduced to Codio.com by +Rok Garbas. It turns out to be a very nice platform for developing Plone projects. So far what I like is that every Codio box pretty much ships with all the Plone dependencies while at the same time having a full suite of Node-based tools (important for modern JavaScript development), which is a great time saver on new projects. These are still early days so I

April 14, 2014

Plone.org: Plone Website Accounts Safe from Heartbleed

2014-04-14T21:01:51Z

The plone.org website is safe from the Heartbleed bug and, as such, plone.org passwords have not been disclosed.

Martijn Faassen: The Call of Python 2.8

2014-04-14T11:52:00Z

Introduction

Guido recently felt he needed to re-emphasize that there will be no Python 2.8. The Python developers have been very clear for years that there will never be a Python 2.8.

http://legacy.python.org/dev/peps/pep-0404/

At the Python language summit there were calls for a Python 2.8. Guido reports:

We (I) still don't want to do a 2.8 release, and I don't want to accelerate 3.5, but I do think we should make things better for people who have to straddle Python 2 and 3 in a single codebase, by developing more tools, and by security and possibly installer updates to 2.7 (PEP 466).

At his keynote at PyCon, he said it again:

/guido_no.jpg

A very good thing happened in recognition of the reality that Python 2.7 is still massively popular: Guido changed the end of life date for Python 2.7 to 2020 (it was 2015). In the same change he felt he should repeat that there will be no Python 2.8:

+There will be no Python 2.8.

The call for Python 2.8 is strong. Even Guido feels it!

People talk about a Python 2.8, and are for it, or, like Guido, against it, but rarely talk about what it should be. So let's actually have that conversation.

Why talk about something that will never be? Because we can't call for something, nor reject something if we don't know what it is.

What is Python 2.8 for?

Python 2.8 could be different things. It could be a Python 2.x release that reduces some pain points and adds features for Python 2 developers independent from what's going on in Python 3. It makes sense, really: we haven't had a new Python 2 feature release since 2010 now. Those of us with existing large Python 2 codebases haven't benefited from the work the language developers have done in those years. Even polyglot libraries that support Python 2 and 3 both can't use the new features, so are also stuck with a 2010 Python. Before Python 2.7, the release cycle of Python has seen a new compatible release every 2 years or less. The reality of Python for many of its users is that there has been no feature update of the language for years now.

But I don't want to talk about that. I want to talk about Python 2.8 as an incremental upgrade path to Python 3. If we are going to add features to Python 2, let's take them from Python 3. I want to talk about bringing Python 2.x closer to Python 3. Python 2 might never quite reach Python 3 parity, but it could still help a lot if it can get closer incrementally.

Why an incremental upgrade?

In the discussion about Python 3 there is a lot of discussion about the need to port Python libraries to Python 3. This is indeed important if you want the ability to start new projects on Python 3. But many of us in the trenches are working on large Python 2 code bases. This isn't just maintenance. A large code base is alive, so we're building new features in Python 2.

Such a large Python codebase is:

  • Important to some organization. Important enough for people to actually pay developers money to work on Python code.
  • Cannot be easily ported in a giant step to Python 3, even if all external open source libraries are ported.
  • Porting would not see any functional gain, so the organization won't see it as a worthwhile investment.
  • Porting would entail bugs and breakages, which is what the organization would want to avoid.

You can argue that I'm overstating the risks of porting. But we need to face it: many codebases written in Python 2 have low automatic test coverage. We don't like to talk about it because we think everybody else is better at automated testing than we are, but it's the reality in the field.

We could say, fine, they can stay on Python 2 forever then! Well, at least until 2020. I think this would be unwise, as these organizations are paying a lot of developers money to work on Python code. This has an effect on the community as a whole. It contributes to the gravity of Python 2.

Those organizations, and thus the wider Python community, would be helped if there was an incremental way to upgrade their code bases to Python 3, with easy steps to follow. I think we can do much more to support such incremental upgrades than Python 2.7 offers right now.

Python 2.8 for polyglot developers

Besides helping Python 2 code bases go further step by step, Python 2.8 can also help those of us who are maintaining polyglot libraries, which work in both Python 2 and Python 3.

If Python 2.8 backported Python 3 features, polyglot authors could start using those features by dropping Python 2.7 support in their polyglot libraries (requiring Python 2.8 instead), without giving up Python 2 compatibility. Python 2.8 would actually help encourage those on Python 2.7 codebases to move towards Python 3, so they can use the library upgrades.

Of course dropping Python 2.x support entirely for a polyglot library will also make that possible. But I think it'll be feasible to drop Python 2.7 support in favor of Python 2.8 much faster than it is possible to drop Python 2 support entirely.

But what do we want?

I've seen Python 3 developers say: but we've done all we could with Python 2.7 already! What do you want from a Python 2.8?

And that's a great question. It's gone unanswered for far too long. We should get a lot more concrete.

What follows are just ideas. I want to get them out there, so other people can start thinking about them. I don't intend to implement any of it myself; just blogging about it is already breaking my stress-reducing policy of not worrying about Python 3.

Anyway, I might have it all wrong. But at least I'm trying.

Breaking code

Here's a paradox: I think that in order to make an incremental upgrade possible for Python 2.x we should actually break existing Python 2.x code in Python 2.8! Some libraries will need minor adjustments to work in Python 2.8.

I want to do what the from __future__ pattern was introduced for in the first place: introduce a new incompatible feature in a release while making it optional, and then later make the incompatible feature the default.

The Future is Required

Python 2.7 lets you do from __future__ import something to get the interpreter behave a bit more like Python 3. In Python 2.8, those should be the default behavior.

In order to encourage this and make it really obvious, we may want to consider requiring these in Python 2.8. That means that the interpreter raises an error unless a module has such from __future__ imports at its top.

If we go for that, it means you have to have this on the top of all your Python modules in Python 2.8:

  • from __future__ import division
  • from __future__ import absolute_import
  • from __future__ import print_function

absolute_import appears to be uncontroversial, but I've seen people complain about both division and print_function. If people reject Python 3 for those reasons, I want to make clear I'm not in the same camp. I believe that is confusing at most a minor inconvenience with a dealbreaker. I think discussion about these is pretty pointless, and I'm not going to engage in it.

I've left out unicode_literals. This is because I've seen both Nick Coghlan and Armin Ronacher argue against them. I have a different proposal. More below.

What do we gain by this measure? It's ugly! Yes, but we've made the upgrade path a lot more obvious. If an organisation wants to upgrade to Python 2.8, they have to review their imports and divisions and change their print statements to function calls. That should be doable enough, even in large code bases, and is an upgrade path a developer can do incrementally, maybe even without having to convince their bosses first. Compare that to an upgrade to Python 3.

from __future3__ import new_classes

We can't do everything with the old future imports. We want to allow more incremental upgrading. So let's introduce a new future import.

New-style classes, that is, classes that derive from object, were introduced in Python 2 many years ago, but old-style classes are still supported. Python 3 only has new-style classes. Python 2.8 can help here by making new-style classes the default. If you have from __future3__ import new_classes at the top of your module, any class definition in that module that looks like this:

class Foo:
   pass

is interpreted as a new-style class.

This might break the contract of the module, as people may subclass from this class and expect an old-style class, and in some (rare) cases this can break code. But at least those problems can be dealt with incrementally. And the upgrade path is really obvious.

__future3__?

Why did I write __future3__ and not __future__? Because otherwise we can't write polyglot code that is compatible in Python 2 and Python 3.

Python 3.4 doesn't support from __future__ import new_classes. We don't want to wait for a Python 3.5 or Python 3.6 to support this, even if there is any interest in supporting this among the Python language developers at all. Because after all, there won't be a Python 2.8.

That problem doesn't exist for __future3__. We can easily fake a __future3__ module in Python 3 without being dependent on the language developers. So polyglot code can safely use this.

from __future3__ import explicit_literals

Back to the magic moment of Nick Coghlan and Armin Ronacher agreeing.

Let's have a from __future3__ import explicit_literals.

This forces the author to be entirely explicit with string literals in the module that imports it. "foo" and 'foo' are now errors; the module won't import. Instead the module has to be explicit and use b'foo' and u'foo' everywhere.

What does that get us? It forces a developer to think about string literals everywhere, and that helps the codebase become incrementally more compatible with Python 3.
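
To illustrate the proposal (hypothetical syntax – this runs on no existing interpreter):

from __future3__ import explicit_literals

greeting = u'hello'    # accepted: explicit text literal
payload = b'\x00\x01'  # accepted: explicit bytes literal
oops = 'hello'         # rejected: a bare literal; the module fails to import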

from __future3__ import str

This import line does two things:

  • you get a str function that creates a Python 3 str. This string has unicode text in it and cannot be combined with Python 2 style bytes and Python 3 style bytes without error (which I'll discuss later).
  • if from __future3__ import explicit_literals is in effect, a bare literal now creates a Python 3 str. Or maybe explicit_literals is a prerequisite and from __future3__ import str should error if it isn't there.

I took this idea from the Python future module, which makes Python 3 style str and bytes (and much more) available in Python 2.7. I've modified the idea as I have the imaginary power to change the interpreter in Python 2.8. Of course anything I got wrong is my own fault, not the fault of Ed Schofield, the author of the future module.
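
For comparison, here is what the future module already makes possible on a stock Python 2.7 – a runnable sketch, assuming pip install future:

# Python 2.7, with the 'future' package installed
from builtins import str, bytes  # Python 3 style str and bytes

s = str(u'text')
b = bytes(b'data')
s + b  # raises TypeError, as it would on Python 3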

from __past__ import bytes

To ensure you still have access to Python 2 bytes (really str) just in case you still need it, we need an additional import:

from __past__ import bytes as oldbytes

oldbytes can be called with a Python 2 str, Python 2 bytes and Python 3 bytes. It rejects a Python 3 str. I'll talk about why it can be needed in a bit.

Yes, __past__ is another new namespace we can safely support in Python 3. It would get more involved in Python 3: it contains a forward port of the Python 2 bytes object. Python 3 bytes have less features than Python 2 bytes, and this has been a pain point for some developers who need to work with bytes a lot. Having a more capable bytes object in Python 3 would not hurt existing Python 3 code, as combining it with a Python 3 string would still result in an error. It's just an alternative implementation of bytes with more methods on it.

from __future3__ import bytes

This is the equivalent import for getting the Python 3 bytes object.

Combining Python 3 str/bytes with Python 2 unicode/str

So what happens when we somehow combine a Python 3 str/bytes with a Python 2 str/bytes/unicode? Let's think about it.

The future module by Ed Schofield forbids py3bytes + py2unicode, but supports other combinations and upcasts them to their Python 3 version. So, for instance, py3str + py2unicode -> py3str. This is a consequence of the way it tries to make Python 2 string literals work a bit like they're Python 3 unicode literals. There is a big drawback to this approach; a Python 3 bytes is not fully compatible with APIs that expect a Python 2 str, and a library that tried to use this approach would suffer API breakage. See this issue for more information on that.

I think since we have the magical power to change the interpreter, we can do better. We can make real Python 3 string literals exist in Python 2 using __future3__.

I think we need these rules:

  • py3str + py2unicode -> py3str
  • py3str + py2str: UnicodeError
  • py3bytes + py2unicode: TypeError
  • py3bytes + py2str: TypeError

So while we upcast existing Python 2 unicode strings to Python 3 str we refuse any other combination.

Why not let people combine Python 2 str/bytes with Python 3 bytes? Because the Python 3 bytes object is not compatible with the Python 2 bytes object, and we should refuse to guess and immediately bail out when someone tries to mix the two. We require an explicit Python 2 str call to convert a Python 3 bytes to a str.

This is assuming that the Python 3 str is compatible with Python 2 unicode. I think we should aim for making a Python 3 string behave like a subclass of a Python 2 unicode.

What have we gained?

We can now start using Python 3 str and Python 3 bytes in our Python 2 codebases, incrementally upgrading, module by module.

Libraries could upgrade their internals to use Python 3 str and bytes entirely, and start using Python 3 str objects in any public API that returns Python 2 unicode strings now. If you're wrong and the users of your API actually do expect str-as-bytes instead of unicode strings, you can go deal with these issues one by one, in an incremental fashion.

For compatibility you can't return Python 3 bytes where Python 2 str-as-bytes is used, so judicious use of __past__.bytes would be needed at the boundaries in these cases.

After Python 2.8

People who have ported their code to Python 2.8 and have turned on all the __future3__ imports incrementally will be in a better place to port their code to Python 3. But to offer a more incremental step, we can have a Python 2.9 that requires the __future3__ imports introduced by Python 2.8. And by then we might have thought of some other ways to smoothen the upgrade path.

Summary

  • There will be no Python 2.8. There will be no Python 2.8! Really, there will be no Python 2.8.
  • Large code bases in Python need incremental upgrades.
  • The upgrade from Python 2 to Python 3 is not incremental enough.
  • A Python 2.8 could help smoothen the way.
  • A Python 2.8 could help polyglot libraries.
  • A Python 2.8 could let us drop support for Python 2.7 with an obvious upgrade path in place that brings everybody closer to Python 3.
  • The old __future__ imports are mandatory in Python 2.8 (except unicode_literals).
  • We introduce a new __future3__ in Python 2.8. __future3__ because we can support it in Python 3 today.
  • We introduce from __future3__ import new_classes, mandating new style objects for plain class statements.
  • We introduce from __future3__ import explicit_literals, str, bytes to support a migration to use Python 3 style str and bytes.
  • We introduce from __past__ import bytes to be able to access the old-style bytes object.
  • A forward port of the Python 2 bytes object to Python 3 would be useful. It would error if combined with a Python 3 str, just like the Python 3 bytes does.
  • A future Python 2.9 could introduce more incremental upgrade steps. But there will be no Python 2.9.
  • I'm not going to do the work, but at least now we have something to talk about.

April 13, 2014

JC Brand: New spelling and grammar checker for TinyMCE

2014-04-13T17:10:55Z

The spellchecker "After the deadline" is now available as an add-on for TinyMCE in Plone. It's multilingual, open-source, platform agnostic and even does grammar checking.

I actually added this add-on two months ago already, but didn't get around to mentioning it. After the deadline is an open-source spelling and grammar checker that can be integrated with web-based WYSIWYG editors, such as our beloved TinyMCE.

This new spelling checker provides some advantages over the existing (and still default) IESpell. Firstly, IESpell only works on Microsoft Windows. Secondly, IESpell is only free for non-commercial purposes. After the deadline, however, is open-source (GPL license, see here) and platform agnostic. Oh, and it's multilingual as well! To top it off... the killer feature, for us at least, is that it has a grammar checker. Pretty impressive IMHO.

The actual thing doing the spellchecking is a Java server app, which you should download and install on your own server. There is a public default option for testing and demonstration purposes, but it comes with no guarantees.

After the deadline is included with all Products.TinyMCE releases since 1.2.1, but it's not the default spell checker. Enabling it is at least straightforward.

Enabling AtD in TinyMCE for Plone:

  • Go to the Plone control panel and click on "TinyMCE Visual Editor"
  • Click on 'Toolbar' (middle left)
  • Make sure that 'spellchecker' is checked.
  • Click on 'Libraries' (top right)
  • Under Spellchecker plugin to use, choose After the deadline
  • Under AtD Service URL, choose your AtD server's URL. (The default is their public service.)
  • It's however recommended that you install your own AtD spellchecker service
  • See here for more details.
  • You should now have After the deadline enabled and have a spellcheck button in your TinyMCE editor.

If you'd like a demonstration of ATD, click here.

Oh, and if you find any spelling/grammar mistakes in this blogpost, it's because I'm still using Products.TinyMCE 1.1.6 ;)

Jazkarta Blog: Plone 5 and the 2014 Emerald Sprint

by Sally Kleinfeldt at 2014-04-13T03:06:26Z

2014 Emerald Sprint Attendees

Back row l to r: Alec Mitchell, Spanky Kapanka, Eric Steele, Ian Anderson, Ross Patterson, Luke Bannon, Cris Ewing, Andy Leeb, Cal Doval, Chris Calloway. Front row l to r: Elizabeth Leddy, David Glick, Steve McMahon, Fulvio Casali, Franco Pellegrini, Sally Kleinfeldt, Trish Ang. Photo by Trish Ang.

I recently returned from the Emerald Sprint, and I have to say that Plone 5 is starting to look pretty good. For developers, there is a solid core buildout that even I was able to run without a hitch. So if there’s a PLIP (Plone Improvement Proposal) or a feature that interests you, and you’ve been thinking about contributing – do it! The community awaits you with open arms.  And what a great community! You don’t need to be a Python developer – Plone 5 is a Javascript-friendly development environment. We would love to have more Javascript developers and designers join our ranks. You won’t be sorry.

OOTB Plone 5 with the new editing toolbar on the left. Still a work in progress, you will be able to choose top or side placement, and icons, text, or both.

But UI improvements and new features are the real cause of excitement. The first thing Plonistas will notice is the new theme – people new to Plone won’t find it remarkable, just clean and modern, but we Plone folks have been looking forward to replacing Sunburst for a long time. In fact we’ve been looking forward to Plone 5 for a long time. After the community gained consensus about what Plone 5 will be, things got a bit bogged down. The Javascript rewrite was extensive. The new content type framework (Dexterity) had to gain maturity as a Plone 4 add-on. There was – and still is – much discussion over how to improve and streamline content editing and page layouts; ideas are being implemented as add-on products such as collective.cover.

Over the last few months the community has really picked up the pace on Plone 5.  Supported by the Plone Foundation and numerous sponsors, there have been a series of productive sprints. The Emerald Sprint’s focus was on user management, registration, and login.  A robust system of user permissions, groups, and roles is one of Plone’s most notable (and oldest) features and the concepts and underpinnings are still solid. However the UI is overdue for an overhaul and the old implementation layers have gotten pretty crufty.

Cris Ewing shows a mockup of the new registration process

The sprinters, led by David Glick, took a UI-first approach, which was fantastic. Before cracking open the laptops and diving into the code, we developed mockups of the new registration and login process and user management screens. It really helped that 2 of the 17 sprinters were UI/UX designers. Always try to get designers to come to your sprints!

The sprint gathered together a fantastic set of Plone gurus who were able to have in depth discussions of some of the thornier technical problems associated with users. For example, should user objects work like content objects? This discussion resulted in a concrete list of pros and cons and a better understanding of how to ameliorate potential problems if and when we decide to move to “contentish” users.

And of course software got developed. Teams worked on the Users and Groups control panel, member profile design and implementation, registration, log in, and forgotten password dialogs, and more. Read the summary report on plone.org for more details.

Plone 5 log in design.


April 12, 2014

keul: "General Type": a new criterion for Plone Collections

by Luca Fabbri at 2014-04-12T17:59:17Z

A new 1.2 version of plone.app.querystring has been released.
There are some improvements and bugfixes, but I'm particularly interested in one new feature: customizable parsed queries. Why?

Some time ago I started developing a new product to provide some usability enhancements for type categorization using Collections, but it was a dead end: it wouldn't work without patching Plone. The accepted pull request changed everything, so here it is: collective.typecriterion.

The product wants to replace the Collection's "Types" search term (adding a new search term called "General type").

The purpose of the add-on is to fix some usability issues with Collections:
  • Users don't always understand all of the content types installed in the site
  • Users don't always get the difference between one type and another (classic examples: Page and File, or File and Image)
There are also some missing features:
  • There's no way to quickly define a new type alias or exclude types from the list
  • There's no way to group types under a general (but more user-friendly) new type
Some of the points above could be addressed by searching for types using interfaces (through the object_provides index) instead of portal_type (the attribute that commonly stores the primitive type name of every content item), but:
  • although searching by interface is the suggested way to search by type, it's not used anywhere by the Plone UI
  • using interfaces leads to inheritance behavior (which is great... until you really don't want it)
  • sometimes you don't have the right interface to use. For example, there's an ITextContent interface in ATContentTypes, but it's implemented only by Page and News, not by Event. And generating new interfaces is a developer task
The idea is to keep using portal_type but give administrators a way to group and organize them in a more friendly form.
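
For context, here are the two query styles side by side – a sketch against the standard portal_catalog; the ITextContent dotted path is an assumption, so verify it against your ATContentTypes version:

# query by primitive type name (what the new criterion keeps using)
results = context.portal_catalog(portal_type=['Document', 'News Item'])

# query by interface, which follows inheritance
results = context.portal_catalog(
    object_provides='Products.ATContentTypes.interfaces.ITextContent')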

After installation the new control panel entry "Type criterion settings" will be available.
The goal of the configuration panel is simple: it is possible to group a set of types under the cloak of a new, descriptive type name. In the example, we again take the definition of a "textual" content (a content that contains rich text data), grouping all the known types.

After the configuration you can start using the new search term.
Usability apart, there's also another advantage to this approach: integration with 3rd party products.

Let's say you defined a new general type called "Multimedia" and configured it as a set that contains Image and Video, and let's say that Video came from the installation of the redturtle.video product.
After a while you plan a switch from redturtle.video to wildcard.media. What you need to do is simply change the configuration of the general type, not all the collections in the site.

Finally, an interesting note: the code inside collective.typecriterion is really small. All the magic (once again) comes from Plone.

April 11, 2014

Starzel.de: Mastering Plone

2014-04-11T14:50:00Z

tl;dr: We're giving our three-day "Mastering Plone" training in Munich (in English)

During the course of this three-day training we will teach how to

  • ... wield the awesome features of Plone for maximum effect.
  • ... customize and extend Plone to make it do exactly what you want.
  • ... use the current best-practices to become a Plone rockstar.

In the first part we'll teach the fundamentals needed to set up and manage your own website using the many built-in features of Plone.

The second part will focus on customizations that may be done through-the-web.

The third and longest part will be devoted to Plone-development. Here we focus on current best-practices like Dexterity and Diazo that make developing with Plone fun and accessible.

The training is for people who are new to Plone or who have worked with older versions of Plone and want to learn the current best-practices. Basic Python and HTML/CSS skills are a requirement.

The course is an updated and expanded version of the trainings we held at the international Plone-Conferences in Arnhem and Brasilia. The documentation for these can be found at http://starzel.github.io/training

As always, we give the training as a team of two trainers. This way you'll receive 1-on-1 help as soon as something works differently than expected – something that is not possible with a single trainer, and something that adds a lot of insight when things don't work as expected.

If you're interested, call us at +49 (0)89 - 189 29 533 or send a mail to

Date:
26. - 28. May 2014

Time:
9:00 - 18:00

Location:
EineWeltHaus
Schwanthalerstr. 80
80336 München

Trainers:
Philip Bauer
Patrick Gerken

Language:
English

Cost:
EUR 1000.- per person plus 19% MwSt (VAT)

Photo: https://www.flickr.com/photos/mindonfire/4447448937