Planet Plone

This is where developers and integrators write about Plone, and is your best source for news and developments from the community.

November 20, 2014

Six Feet Up: Handling PloneFormGen Submissions : The Email Adapter

by Christine Shaw at 2014-11-20T14:00:00Z

PloneFormGen is a Plone add-on that makes it fast and easy to create forms on your website. With just a few clicks, you can quickly create contact forms, order forms, surveys or any other form for collecting and validating data submitted by site visitors.

What happens after the user submits the form? By default, PloneFormGen displays a generic "Thank You" page which simply lists the data that was entered. It also emails the data provided to the default site email address which is set in Site Setup -> Mail -> Site 'From' address.

With a few clicks, you could customize the form submission to:

  • Display a custom "Thank You" page
  • Configure the mailer adapter to send to different addresses based on data entered in the form
  • Customize the content of emails sent (confirmation to user and/or submission to site personnel)

Customizing the "Thank You" Page

By default, PloneFormGen displays the data the user entered and a simple message when the form is submitted:

[Screenshot: the default "Thank You" page]

To create a more professional "Thank You" page:

1. Go to the "contents" tab of your form and select "Thank You".

[Screenshot: the form's "contents" tab with "Thank You" selected]

2. Click the "edit" tab on the "Thank You" page.

Using the "Default" tab, you can:

  • Change the Title of your page (maybe you want the title to say "Order Received")
  • Change or remove the default description ("Thanks for your input.")
  • Use the rich text editors to insert text, images, etc. above and below the display of the data fields submitted (these fields are empty by default)

[Screenshot: the "Default" tab of the "Thank You" page edit form]


Choose what to display:

Using the "Fields" tab, you can select whether or not to display the data that was submitted back to the user. If you wish to display the data, the default is to show all fields (whether or not data was entered for each). This may result in some unprofessional items being listed, such as Captcha text if Captcha is used on your form or field names for data the user did not submit.

On the "Fields" tab, you can specify which data elements to select. To display no data at all, simply uncheck "Show all Fields."

To display only certain fields, use the "Show Responses" widget and assign the fields you'd like to display in the right column:

To suppress empty fields from being displayed, uncheck "Include Empties."

[Screenshot: the "Fields" tab]


Bypassing the "Thank You" Page

If you would prefer to have the user redirected to a page other than the "Thank You" page:

1. On the view of your form, click the "Edit" tab

2. Choose the "Overrides" section

3. In the "Custom Success Action" field, enter the URL you would like to redirect to, preceded by the text "redirect_to:string:"

[Screenshot: the "Custom Success Action" field on the "Overrides" tab]


Configuring the Mailer Adapter : Recipients

By default, PloneFormGen includes a mailer adapter with the form that emails any content submitted to the default site email address found within Site Setup -> Mail -> Site 'From' address.

When you are adding a new form, there is a check box which adds the Mailer adapter to the form. If you do not want the form to send out any email, simply uncheck this box on form creation.

[Screenshot: the Mailer check box on the add-form]

To stop the form from sending mail:

If you are editing an existing form and decide you do not want it to send email any more, simply edit the form and uncheck "Mailer" under the "Action Adapters" listed. At this point, your form would do nothing but display a "Thank You" page on submission.

Say you do want your form to send emails, but you'd like to customize who receives them and what is sent. You can change the default Mailer behavior by following these steps:

1. To configure the mailer adapter, click on the "Contents" tab of your Form Folder and select "Mailer":

[Screenshot: the form's "Contents" tab with "Mailer" selected]

2. Click the "edit" tab on the Mailer adapter.

On the "Default" tab, you can change the Recipient's name and address to something other than the site default:

[Screenshot: the Mailer adapter's "Default" tab]

To customize recipients:

The "Addressing" tab allows you to customize other recipients of the email sent after submission. By default, no email is sent to anyone other than the site owner.

To send a confirmation email to the address the user entered in the form, choose the field used to capture their address. In this example, the field was titled "Your E-Mail Address" on the form displayed to the user. You may also enter additional email addresses for CC and BCC recipients:

[Screenshot: the "Addressing" tab]

What if you'd like the user to determine who receives the email? For example, suppose you have a "Contact Us" form. It could have a field with multiple choices for the contact reason. The format for each item is "Display Text|email address".
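For example, a hypothetical "Reason for Contact" selection field might list its choices like this (the addresses are placeholders; substitute your own):

General Inquiry|info@example.com
Sales|sales@example.com
Technical Support|support@example.com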

[Screenshot: a selection field using the "Display Text|email address" format]

This will result in a drop-down on your form for the user to select from:

[Screenshot: the resulting drop-down on the form]

When the form is submitted, the mailer will send it to the address selected by the user.

Configuring the Mailer Adapter : The Email Message

The default email message sent simply lists the data submitted:

[Screenshot: the default email message]

To customize the contents of the email, select the "Message" tab. From here, you can modify the:

  • Subject
  • Body prologue and epilogue (surrounding the data entered)
  • Email Signature
  • Which data fields to include and whether or not to list empty fields

The "Template" tab allows you to edit the HTML that renders the message being sent and allows you to create dynamic text.

For example, if you wanted to open the email with a salutation which included the person's name from the data entered, you could add the following to fill in the fields dynamically (assuming the shortnames of the fields on the form were first-name and last-name):

Dear <tal:block tal:content="python:'%s %s' % (request.form.get('first-name', ''), request.form.get('last-name', ''))" />,


You can customize so much more with PloneFormGen

This is only the tip of the iceberg when it comes to the many ways you can easily customize forms on your site. In future articles, we'll cover:

  • How to create custom data validation
  • Styling your form with CSS
  • Saving the data to a file or database
  • Sending data to Salesforce
  • How to group form fields together and create multi-page forms
  • Creating content items within the site from the data submitted
  • And more!

Was this article useful? Let us know in the comments and be sure to sign up for our Plone & Python How-To digests to receive more how-to guides as soon as they are published!

Asko Soukka: Transmogrifier, the Python migration pipeline, also for Python 3

by Asko Soukka at 2014-11-20T13:06:00Z

TL;DR: I forked collective.transmogrifier into plain transmogrifier (not yet released) to make its core usable without Plone dependencies, use Chameleon for TAL expressions, be installable with just pip install, and be compatible with Python 3.

Transmogrifier is one of the many great developer tools by the Plone community. It's a generic pipeline tool for data manipulation, configurable with plain text INI files, and new re-usable pipeline section blueprints can be implemented and packaged in Python. It could be used to process any number of things, but historically it's been mainly developed and used as a pluggable way to import legacy content into Plone.

A simple transmogrifier pipeline for dumping news from Slashdot to a CSV file could look like:


[transmogrifier]
pipeline =
    from_rss
    to_csv

[from_rss]
blueprint = transmogrifier.from_expression
modules = feedparser
expression = python:modules['feedparser'].parse(options['url']).get('entries', [])
url = http://rss.slashdot.org/slashdot/slashdot

[to_csv]
blueprint = transmogrifier.to_csv
fieldnames =
    title
    link
filename = slashdot.csv

Actually, at the time of writing this, I've yet to do any Plone migrations using transmogrifier. But when we recently had a reasonably sized non-Plone migration task, I knew not to re-invent the wheel, but to transmogrify it. And we succeeded. The transmogrifier pipeline helped us design the migration better, and splitting the data processing into multiple pipeline sections helped us delegate the work between multiple developers.

Unfortunately, collective.transmogrifier currently has unnecessary dependencies on CMFCore, is not installable without a long known-good set of versions, and is missing any built-in command-line interface. At first, I tried to do all the necessary refactoring inside collective.transmogrifier, but eventually a fork was required to make the transmogrifier core usable outside Plone environments, be compatible with Python 3, and not break any existing workflows depending on the old transmogrifier.

So, meet the new transmogrifier:

  • can be installed with pip install (although, not yet released at PyPI)
  • new mr.migrator inspired command-line interface (see transmogrify --help for all the options)
  • new base classes for custom blueprints (see the sketch after this list)
    • transmogrifier.blueprints.Blueprint
    • transmogrifier.blueprints.ConditionalBlueprint
  • new ZCML-directives for registering blueprints and re-usable pipelines
    • <transmogrifier:blueprint component="" name="" />
    • <transmogrifier:pipeline id="" name="" description="" configuration="" />
  • uses Chameleon for TAL-expressions (e.g. in ConditionalBlueprint)
  • has only a few generic built-in blueprints
  • supports z3c.autoinclude for package transmogrifier
  • fully backwards compatible with blueprints for collective.transmogrifier
  • runs with Python >= 2.6, including Python 3+
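As a quick illustration of the new base classes, here is a minimal sketch of a custom blueprint (the class and its behavior are hypothetical; registration is covered in the migration project example below):

from transmogrifier.blueprints import Blueprint


class UppercaseTitle(Blueprint):
    """Pass every item through, uppercasing its title when present."""

    def __iter__(self):
        for item in self.previous:
            if 'title' in item:
                item['title'] = item['title'].upper()
            yield item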

There's still much work to do before a real release (e.g. documenting and testing the new CLI script and the new built-in blueprints), but let's see how it already works...

P.S. Please, use a clean Python virtualenv for these examples.

Example pipeline

Let's start with an easy installation


$ pip install git+https://github.com/datakurre/transmogrifier
$ transmogrify --help
Usage: transmogrify <pipelines_and_overrides>...
                    [--overrides=<path/to/pipeline/overrides.cfg>]
                    [--context=<path.to.context.factory>]
       transmogrify --list
       transmogrify --show=<pipeline>

and with example filesystem pipeline.cfg


[transmogrifier]
pipeline =
    from_rss
    to_csv

[from_rss]
blueprint = transmogrifier.from_expression
modules = feedparser
expression = python:modules['feedparser'].parse(options['url']).get('entries', [])
url = http://rss.slashdot.org/slashdot/slashdot

[to_csv]
blueprint = transmogrifier.to_csv
fieldnames =
    title
    link
filename = slashdot.csv

and its dependencies


$ pip install feedparser

and the results


$ transmogrify pipeline.cfg
INFO:transmogrifier:CSVConstructor:to_csv wrote 25 items to /.../slashdot.csv

using, for example, Python 2.7 or Python 3.4.

Example migration project

Let's create a migration project with custom blueprints using Python 3.

In addition to transmogrifier, we also need z3c.autoinclude (patched for Python 3) and venusianconfiguration for easy blueprint registration:


$ pip install git+https://github.com/datakurre/transmogrifier
$ pip install git+https://github.com/datakurre/venusianconfiguration
$ pip install git+https://github.com/datakurre/z3c.autoinclude

Next, our working directory must contain a simple setup.py to declare a package for our custom blueprints:


from setuptools import setup, find_packages

setup(
    name='blueprints',
    packages=find_packages(exclude=['ez_setup']),
    install_requires=[
        'setuptools',
        'transmogrifier',
        'venusianconfiguration',
        'fake-factory',
    ],
    entry_points="""
    # -*- Entry points: -*-
    [z3c.autoinclude.plugin]
    target = transmogrifier
    """,
)

Then, we must create a sub-folder for our Python modules


$ mkdir blueprints
$ touch blueprints/__init__.py
$ touch blueprints/configure.py

and register our package for our Python (virtualenv):


$ python setup.py develop

Finally, we can implement custom blueprints in blueprints/configure.py


from venusianconfiguration import configure

from transmogrifier.blueprints import Blueprint
from faker import Faker


@configure.transmogrifier.blueprint.component(name='faker_contacts')
class FakerContacts(Blueprint):

    def __iter__(self):
        for item in self.previous:
            yield item

        amount = int(self.options.get('amount', '0'))
        fake = Faker()

        for i in range(amount):
            yield {
                'name': fake.name(),
                'address': fake.address(),
            }

and see them registered next to the built-in ones (or ones from other packages hooking into the transmogrifier autoinclude entry-point):


$ transmogrify --list

Available blueprints
--------------------
faker_contacts
...

Now, we can make an example pipeline.cfg


[transmogrifier]
pipeline =
    from_faker
    to_csv

[from_faker]
blueprint = faker_contacts
amount = 2

[to_csv]
blueprint = transmogrifier.to_csv

and enjoy the results


$ transmogrify pipeline.cfg to_csv:filename=-
address,name
"534 Hintz Inlet Apt. 804
Schneiderchester, MI 55300"
,Dr. Garland Wyman
"44608 Volkman Islands
Maryleefurt, AK 42163"
,Mrs. Franc Price DVM
INFO:transmogrifier:CSVConstructor:to_csv saved 2 items to -

But even easier would be to just use the shipped mr.bob template...

Migration project using the template

The new transmogrifier ships with an easy getting started template for your custom migration project. To use the template, you need a Python environment with mr.bob and the new transmogrifier:


$ pip install mr.bob readline # readline is an implicit mr.bob dependency
$ pip install git+https://github.com/datakurre/transmogrifier

Then you can create a new project directory with:


$ mrbob bobtemplates.transmogrifier:project

Once the new project directory is created, you can, inside the directory, install the rest of the dependencies and activate the project with:


$ pip install -r requirements.txt
$ python setup.py develop

Now transmogrify knows your project's custom blueprints and pipelines:


$ transmogrify --list

Available blueprints
--------------------
custom.mock_contacts
...

Available pipelines
-------------------
example
Example: Generates uppercase mock addresses

And the example pipeline can be executed with:


$ transmogrify example
name,address
ISSAC KOSS I,"PSC 8465, BOX 1625
APO AE 97751"

TESS FAHEY,"PSC 7387, BOX 3736
APO AP 13098-6260"

INFO:transmogrifier:CSVConstructor:to_csv wrote 2 items to -

Please see the created README.rst for how to edit the example blueprints and pipelines and create more.

Mandatory example with Plone

Using the new transmogrifier with Plone should be as simple as adding it into your buildout.cfg next to the old transmogrifier packages:


[buildout]
extends = http://dist.plone.org/release/4.3-latest/versions.cfg
parts = instance
versions = versions

extensions = mr.developer
sources = sources
auto-checkout = *

[sources]
transmogrifier = git https://github.com/datakurre/transmogrifier

[instance]
recipe = plone.recipe.zope2instance
eggs =
    Plone
    z3c.pt
    transmogrifier
    collective.transmogrifier
    plone.app.transmogrifier
user = admin:admin
zcml = plone.app.transmogrifier

[versions]
setuptools =
zc.buildout =

Let's also write a fictional migration pipeline, which would create Plone content from Slashdot RSS-feed:


[transmogrifier]
pipeline =
    from_rss
    id
    fields
    folders
    create
    update
    commit

[from_rss]
blueprint = transmogrifier.from_expression
modules = feedparser
expression = python:modules['feedparser'].parse(options['url']).get('entries', [])
url = http://rss.slashdot.org/Slashdot/slashdot

[id]
blueprint = transmogrifier.expression
modules = uuid
id = python:str(modules['uuid'].uuid4())

[fields]
blueprint = transmogrifier.expression
portal_type = string:Document
text = path:item/summary
_path = string:slashdot/${item['id']}

[folders]
blueprint = collective.transmogrifier.sections.folders

[create]
blueprint = collective.transmogrifier.sections.constructor

[update]
blueprint = plone.app.transmogrifier.atschemaupdater

[commit]
blueprint = transmogrifier.from_expression
modules = transaction
expression = python:modules['transaction'].commit()

Now, the new CLI script can be used together with bin/instance -Ositeid run provided by plone.recipe.zope2instance, so that transmogrifier will get your site as its context simply by calling zope.component.hooks.getSite:


$ bin/instance -OPlone run bin/transmogrify pipeline.cfg --context=zope.component.hooks.getSite

With Plone you should, of course, still use Python 2.7.

November 19, 2014

UW Oshkosh How-To's: Email address autocompletion

by nguyen at 2014-11-19T21:40:12Z

In our specific use case, we needed to have the standard 'replyto' field in a PloneFormGen form autocomplete based on the last name of a person being typed.

https://www.uwosh.edu/intranet/Members/kimadmin/kims-test-form/view

You'll have to log in.

The email address ("replyto") field defaults to your email address but if you select it all, delete, and start typing, it'll autocomplete with campus email addresses.

I put into our svn repo:

email_address_autocomplete.js 

which I registered in the Intranet's portal_javascripts so it's loaded only for authenticated users; it attaches itself to any field named 'replyto'.

getMatchingEmailWS.py

which is an external method I added to the Intranet's portal_skins/custom folder; it calls the web service on ws.it.uwosh.edu, which returns at most 100 email addresses that match the provided string on last name (minimum length: two characters). This means that on the Intranet, any PFG form that includes the default email address field will have autocomplete. :)
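The post does not include the source of getMatchingEmailWS.py itself, but a minimal sketch of such an external method might look like the following (the endpoint URL and parameter names are assumptions, not the actual UW Oshkosh service):

# Extensions/getMatchingEmailWS.py -- hypothetical sketch, not the actual file
import json
import urllib
import urllib2

WS_URL = 'https://ws.it.uwosh.edu/email/'  # assumed endpoint


def getMatchingEmailWS(self, match='', type='json'):
    """Proxy the campus web service; return a JSON list of at most
    100 matching email records for the autocomplete widget."""
    if len(match) < 2:
        # the service requires at least two characters
        return json.dumps([])
    query = urllib.urlencode({'match': match})
    response = urllib2.urlopen('%s?%s' % (WS_URL, query))
    return response.read()  # the service already returns JSON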


UW Oshkosh How-To's: Using email address autocomplete to set a name field, and vice versa

by nguyen at 2014-11-19T21:39:44Z

Caveat: this information is specific to UW Oshkosh's email address autocompletion mechanism!

This is an extension to the how-to entitled Email address autocompletion in which we explained how we set up autocompletion of just an email address field.

This how-to explains how you can tie two fields together: an email address field and a full name field. Thanks to David Hietpas, our resident JavaScript wiz. :)

Assume you've created a new PloneFormGen form. By default, it contains an email address field ("replyto"), a subject field ("topic"), and a comments field.  

We are going to enable autocompletion on the email address field so that when you start typing a name, you see a list of matching choices, and when you choose one, the email address gets put into the email address field and the matching person's name gets put into the subject field.

In the ZMI -> portal_skins -> custom, add a new File object called email_address_autocompletion.js, and put the following into it:

$(function() {
    var cache = {};
    $( "#replyto" ).autocomplete({
      minLength: 2,
      select: function (e, ui) {
        $("#topic").val(ui.item.fullname)
      },
      source: function( request, response ) {
        var term = request.term;
        $.getJSON( "getMatchingEmailWS", { match: request.term, type: 'json' }, function( data ) {
            response( $.map( data, function( item ) {
              return {
                label: item.f + " " + item.m + " " + item.l + " <" + item.e + ">",
                value: item.e,
                fullname: item.f + " " + item.m + " " + item.l
              }
            }));
        });
      }
    });
  });
$(function() {
    var cache = {};
    $( "#topic" ).autocomplete({
      minLength: 2,
      select: function (e, ui) {
        $("#replyto").val(ui.item.email)
      },
      source: function( request, response ) {
        var term = request.term;
        $.getJSON( "getMatchingEmailWS", { match: request.term, type: 'json' }, function( data ) {
            response( $.map( data, function( item ) {
              return {
                label: item.f + " " + item.m + " " + item.l + " <" + item.e + ">",
                value: item.f + " " + item.m + " " + item.l,
                email: item.e
              }
            }));
        });
      }
    });
  });

Still in the ZMI -> portal_skins -> custom folder, create an External Method with ID, module, and function name "getMatchingEmailWS".  (You will have had to put a file getMatchingEmailWS.py into the Extensions folder on the file system).

In the ZMI -> portal_javascripts, scroll to the bottom and add a new registration with ID "email_address_autocompletion.js".

Reload the PloneFormGen form you just created, so that the new JavaScript loads and takes effect.

Now when you start typing a last name in either the email address or subject field and you pick one of the choices, both fields get filled appropriately.

eGenix: eGenix mxODBC Connect 2.1.1 GA

2014-11-19T08:00:00Z

Introduction

The mxODBC Connect Database Interface for Python allows users to easily connect Python applications to all major databases on the market today in a highly portable, convenient and secure way.

Python Database Connectivity the Easy Way

Unlike our mxODBC Python extension, mxODBC Connect is designed as a client-server application, so you no longer need to find production-quality ODBC drivers for all the platforms you target with your Python application.

Instead you use an easy-to-install, royalty-free Python client library which connects directly to the mxODBC Connect database server over the network.

This makes mxODBC Connect a great basis for writing cross-platform multi-tier database applications and utilities in Python, especially if you run applications that need to communicate with databases such as MS SQL Server and MS Access, Oracle Database, IBM DB2 and Informix, Sybase ASE and Sybase Anywhere, MySQL, PostgreSQL, SAP MaxDB and many more, that run on Windows or Linux machines.

Ideal for Database Driven Client Applications

By removing the need to install and configure ODBC drivers on the client side and dealing with complicated network setups for each set of drivers, mxODBC Connect greatly simplifies deployment of database driven client applications, while at the same time making the network communication between client and database server more efficient and more secure.

For more information, please have a look at the mxODBC Connect product page, in particular, the full list of available features:

    >>> eGenix mxODBC Connect Product Page

News

mxODBC Connect 2.1.1 is a patch-level release of our successful mxODBC Connect product. We have put great emphasis on enhancing the TLS/SSL setup of the mxODBC Connect product, addressing recent attacks on SSLv3 and improving the security defaults.

Security Enhancements

  • Updated included eGenix pyOpenSSL to 0.13.6, which includes OpenSSL 1.0.1j and enables the TLS_FALLBACK_SCSV protection against protocol downgrade attacks.
  • OpenSSL cipher string list updated to use the best available ciphers in OpenSSL 1.0.1j per default and to support perfect forward secrecy.
  • OpenSSL context options setup to disallow weak protocol features.
  • Disabled SSLv3 for the mxODBC Connect Client in response to the recent POODLE attack on SSLv3.

    mxODBC Connect Client 2.1.1 will not be able to communicate with mxODBC Connect Server 2.1.0 and earlier when using SSL mode. The error message looks like this: [Error] [('SSL routines', 'SSL23_GET_SERVER_HELLO', 'unsupported protocol')] (using pyOpenSSL) or [SSLError] [Errno 1] _ssl.c:493: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number (using the ssl module).
  • Enabled TLS v1, v1.1 and v1.2 for the mxODBC Connect Server in SSL mode and have it use the best possible protocol when talking to a client.

    The server will still support SSLv3 for backwards compatibility reasons, since older mxODBC Connect Clients only support SSLv3. This will be changed in the next major/minor mxODBC Connect Server release.
  • Fixed a linker setting on Linux to have the mxODBC Connect Server use the embedded OpenSSL libraries instead of the system ones.
  • Improved the protocol handlers for SSL connection setups using mixed plain text/TLS connections to renew the session id after having established the TLS session.

mxODBC Connect Enhancements

  • Fixed a problem where connection/cursor.messages could not be accessed from the client side.
  • mxODBC Connect Client is now also available as a web installer, greatly simplifying the installation of the client. It is now possible to install the client using a single pip command:
    pip install egenix-mx-base egenix-mxodbc-connect-client egenix-pyopenssl
  • Upgraded eGenix PyRun used for mxODBC Connect Server on Linux to 2.0.1.
  • Upgraded the Python version used for mxODBC Connect Server on Windows to 2.7.8.

Asynchronous Processing

  • Fixed a problem which prevented the mxODBC Connect Client from connecting to the server when using both gevent integration and the Python ssl module for communication.

mxODBC API Enhancements

  • Upgraded the mxODBC Connect Server to mxODBC 3.3.1.

SQL Server

  • Documented a solution for a problem with the SQL Server 2012 parser complaining about not being able to deduce types of some operations using more than one bound variable, e.g. "col1 >= ? + ?".

Teradata

  • Improved the Teradata ODBC driver setup instructions to address some common gotchas when setting up mxODBC to work with these drivers.
  • Fixed a problem with Teradata and the test suite which resulted in an error "[Teradata][ODBC Teradata Driver] Beyond SQL_ACTIVE_STATEMENTS limit". The driver needs an explicit call to cursor.flush() to close any open result sets before running commits or rollbacks.

Misc

  • Fixed a problem in cursor.getcolattributes() that caused errors to be ignored.
  • Added better protection against ODBC driver bugs in getenvattr().
  • Fixed an attribute error when using the NamespaceRowFactory function.
  • Fixed a deprecation warning when using the NamespaceRowFactory function.

For the full set of changes, including those of the 2.1 series of mxODBC Connect, please check the mxODBC Connect change log.

Upgrading

You are encouraged to upgrade to this latest mxODBC Connect release. When upgrading, please always upgrade both the server and the client installations to the same version - even for patch level releases.

We will give out 20% discount coupons for upgrade purchases going from mxODBC Connect Server 1.x to 2.1 and 50% coupons for upgrades from mxODBC 2.x to 2.1. Please contact the eGenix.com Sales Team with your existing license serials for details.

Users of our stand-alone mxODBC product will have to purchase new licenses from our online shop in order to use mxODBC Connect.

You can request free 30-day evaluation licenses via our website or by writing to sales@egenix.com, stating your name (or the name of the company) and the number of eval licenses that you need.

Downloads

Please visit the eGenix mxODBC Connect product page for downloads, instructions on installation and documentation of the client and the server package.

If you want to try the package, please jump straight to the download instructions.

Fully functional evaluation licenses for the mxODBC Connect Server are available free of charge.

mxODBC Connect Client is always free of charge.

Support

Commercial support for this product is available directly from eGenix.com.

Please see the support section of our website for details.

More Information

For more information on the eGenix mxODBC Connect, licensing and download instructions, please write to sales@egenix.com.

Enjoy!

Marc-Andre Lemburg, eGenix.com

November 16, 2014

Maurits van Rees: Keynote Michael Johnson: Kickstarting the Personal Space Age

by Maurits van Rees at 2014-11-16T17:17:00Z

I am the founder of http://pocketspacecraft.com

I am a physicist. Worked on meteorology and ocean ventilation models and tools, and on Voyager and ROSAT analysis, at Imperial College. Software developer, computer scientist. Aerospace. Co-created the KickSat project.

Founded in 2010, operations in China, Isle of Man, UK, US.

Space is big and relatively unexplored. Missions are expensive and infrequent, funding is tight, missions are risk averse, and it is scary to try really new ideas; processors are often ten years old because we know all their bugs.

Goal: before I retire (say in the next 25 years) I want to send a spacecraft to orbit and/or land on the surface of 'every' body in the solar system. Every is too much, but maybe one million objects?

We need a pocket spacecraft that individuals can buy. Personal space age: everyone can get involved.

We want to explore. Consumer projects: how are you going to support that, on that scale? We want instant gratification for people, but it normally takes years to get something up in the air. Work together.

Help the different communities get involved. If you do something good for the science community, your project may be able to get on board a mission to Mars. Lots of legal hurdles, insurance. Small spacecraft, but big problems.

Video games are a good metric for this. People are prepared to pay 50 dollars for that. Lots of challenges there.

What helps, is open source. An Open Source Space System is being worked on. This is done by individuals in their spare time, but also by big names in space. Bring the International CubeSat Consortium in the mix.

Using open source actually helps against some of the legal hurdles, making you exempt from them.

The CubeSat standard defines a standard spacecraft. One unit is ten by ten by ten centimeters, using 1 Watt, weighing 1 kilogram. You can combine a few. About 50k dollars to launch one. The standard was started ten years ago. About 70 were made in that time, and about 70 in the last six months. So it is being picked up. Much, much more is planned for next year. Launched from the International Space Station.

How is it possible? Moore's law. A KickSat, much smaller than CubeSat, already has more computer power than Voyager had, so don't knock it.

Standard 3 unit CubeSat launch is available today. Not tied to any launch vehicle or nation. We need to get it close to where we want to get (moon, planet). Then we deploy and do whatever we want (except introducing bacteria, so there are rules).

Pocket Spacecraft prototype: 32-96 millimeters in diameter, less than 50 micrometers thick, 10-100 milligrams, 5-100 MIPS, up to 100 GB storage, optical communications. Can be manually produced.

Long term goal: print space craft in space. Design your space craft, send the instructions in the direction of the printer in orbit around Mars, wait 20 to 40 minutes depending on time of year, and you can launch your space craft.

There is now Open Mission Control software, and Pocket Mission Control for your Android.

You need to be able to talk to these space craft, via ground stations. We can do that amateur radio based. See http://mygroundstations.com

But with a credit card and some convincing you can rent NASA communications by the hour and use the same stuff that still talks to Voyager, at 80 light hours distance.

But there is also the LOFAR network, for radio astronomy, which you could use to pinpoint your spacecraft. Lots of data: several petabytes for 15 minutes. Expensive, but prices will drop: Moore's law again. There is an awful lot to do, but that is fine.

Where do you want to explore today?

Standards and info:

Spacecraft parts:

Questions?

50 percent of CubeSats fail. But 50 percent of large space projects fail as well. Interesting, isn't it?

Space junk? We are responsible about that, making sure we do no harm, otherwise we may no longer be allowed in space.

Watch the video of this talk.

Maurits van Rees: Calvin Hendryx-Parker: Hands on with Multisite Management using Lineage

by Maurits van Rees at 2014-11-16T17:16:14Z

I am CTO at Six Feet Up. Doing Plone since 2003. Lineage was one of my first contributions.

Lineage is an add-on for Plone. Enabled by the pieces that are built into Plone, like INavigationRoot, IPossibleSite, together as IChildSite. So we are using the ZCA (Zope Component Architecture).

With Lineage you create one site and lots of sub sites for departments. Advantages compared to having lots of Plone sites:

  • You only need to upgrade Plone once.
  • You can share content among the sites. You can link to it in the TinyMCE editor.
  • Performance is better: you get a smaller footprint for all sites combined.

Disadvantages:

  • Want to split them up into different Plone sites? That is a bit more difficult.
  • Add-ons and configuration are global, not per sub site, so your sites need to be really alike.

There are various add-ons/extensions that build on Lineage, for example lineage.themeselection for giving sub sites a different theme.

Add collective.lineage to the eggs in your buildout (see the sketch below). Activate it in the add-ons control panel. Go to a folder, open the actions drop-down, and enable the folder as a sub site. Within this folder, the home will be this folder. You can add sub sites within sub sites if you want. You can exclude a sub site from navigation if you want to.
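A minimal buildout sketch, assuming the usual plone.recipe.zope2instance setup (part and egg names other than collective.lineage are just the defaults):

[buildout]
parts = instance

[instance]
recipe = plone.recipe.zope2instance
eggs =
    Plone
    collective.lineage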

I want to show you some cool tricks you can do.

You can use custom site types. As long as they are IFolderish types, you can enable a sub site on it. At Penn State we used that to ease creating a new course.

Some add-ons:

  • lineage.registry, for storing some config per site
  • lineage.index: adds a separate index to the catalog per site.
  • Resonate is new and does moving and syndicating to sub sites.

With Resonate it is possible to move content between sub sites, via the workflow, giving control to the content managers of the sub sites.

With Resonate you can syndicate content to various sub sites, still using standard Plone workflow machinery. A teaser of a news item can then be shown on the other sub sites, where they point to the original item, so that one remains the canonical one, which is good for SEO. The editor can see which sub sites accepted and rejected the share.

How does it perform? I set up two Zope instances, one with 100 Plone Sites, one with 1 Plone Site and 100 sub sites. Advantage of Lineage, with blank sites: about 9 MB against about 130 MB. Startup memory did not differ much, slightly better for Lineage. (Audience: may be influenced by the ZODB cache size too.)

Lineage is actually Plone 5 compatible. Version 2.0 was released last night.

Code: https://pypi.python.org/pypi/collective.lineage

Example usage: http://www.la.psu.edu

For migration from the old separate sites to sub sites we used transmogrifier.

Watch the video of this talk.

by Maurits van Rees at 2014-11-16T17:15:22Z

This is about Plone's new front-end library. Mockup is a framework for adding JavaScript functionality from other libraries to Plone. It will be part of Plone 5.

For example the 'moment' pattern:

<span class="pat-moment">some date</span>

That will output a nicer date using the client browser language.

Currently, on Plone 4.3, we are still developing JavaScript as if it were 2004. There are 41 JavaScripts registered in a default site. The resource registries have a packer functionality, with the last commit from 2009. Maybe not so good anymore. No tests.

History of Mockup: based on Patternslib, created by Cornelis Kolbach, Wichert Akkerman and others in 2011. Forked by Rok Garbas in 2012, split up into Mockup and Mockup core in December 2013. plone.widgets and plone.app.widgets by Nathan van Gheem in November 2011, an attempt to bring a more modern UI to Plone.

Build environment:

  • Yo: used to generate base skeleton
  • Grunt: comparable to buildout for js
  • NPM: node package manager
  • Front end: RequireJS.
  • Testing: Karma, Mocha, ExpectJS/CHAI

[Showing some code.]

We made a generator for mockup:

npm install -g generator-plonemockup
yo plonemockup

Then you answer a few questions and you get a base pattern.

Now do some coding and call make and it will minify, create a bundle, etc.

There is a training that you can follow online:

http://mockup-training.readthedocs.org

Alternatives:

  • Patternslib is still being developed and is of course similar to Mockup due to the shared history.
  • Web components, a W3C draft. Some experimental projects: Google Polymer, Mozilla X-Tags. Limited browser compatibility. Status: HTML templates is completed, shadow DOM and HTML imports are in draft.
  • AngularJS directives are similar to Mockup. Angular is a full-stack framework. It will switch to Web Components once they are ready. Hard to migrate Mockup to using AngularJS, of course.

Our opinion about Mockup: great framework, big improvement over the current situation, nice workflow, but uncertainty about the future; maybe Angular could be better. Open space tomorrow at 11:00.

Audience: mockup (or patternslib) is an abstraction layer, Angular is something really different. Don't use Angular as base for the Plone JS. We have to get the whole community into thinking 'Javascriptish', really push it into the head of more developers. It is actually pretty easy to change a Mockup pattern into a Patternslib pattern or the other way around. Changing a date picker pattern into a different one? Not so difficult either. Not enough people are currently feeling secure developing with Mockup, but that will come.

Watch the video of this talk.

by Maurits van Rees at 2014-11-16T17:14:40Z

In some cases there are many sites that are almost the same, for example universities using Plone EDU. Different look and feel, different content, same base.

At Simples Consultoria we have been successful in dealing with this market.

Marketing Plone is very hard. It can do so much, it is so flexible; how are you going to market that to a specific audience?

Marketing a solution is easier. If a client wants an intranet, I am not going to tell him that we made a portal for the Brazilian government.

Customizing Plone should not be hard. Distributions should be first-class citizens. Offer a download specifically for education or for an intranet on plone.org. The code we use to make the standard installers, VMs and packages should be there for distributions too.

We need documentation, tests. We would love to have help with Jenkins and stuff.

Talk the customer's language, know their needs. For example PloneGov and local initiatives.

Something like the Plone Intranet Consortium is the very best thing that happened to Plone in a long time. We need to work like this. Companies should act together. Plone and Intranet are a perfect match. Bring new people to Plone. Companies will save Plone. We love Plone, but Plone needs customers and companies.

Watch the video of this talk.

by Maurits van Rees at 2014-11-16T17:13:39Z

Why would you want to create or run a Plone product on Substance D? Because it is fun. It might be a good experience for the future of Plone.

Substance D has all the good things from Pyramid, plus it stores data in a ZODB. It is a CMS.

Rapido is the next Plomino version. Plomino started in 2006, is still based on Archetypes, and stores data in CMF objects. It uses ZCatalog and PythonScript extensively.

I turned it into Rapido. Plone 5. Based on dexterity.

  • rapido.core, totally independent of Plone.
  • Storage service: rapido.souper provides a storage service based on Souper. Souper works on Plone and Pyramid, so I chose it.
  • rapido.plone, standard dexterity content types, adapts them using rapido.core, ideally uses nothing but plone.api.
  • rapido.substanced, standard substanced.content classes, uses nothing but Substance D api.

[Demo of Plone part and Substance D part.]

So how is this done?

  • TTW scripting is what Rapido is about. I could not use PythonScript, but I used zope.untrustedpython.
  • Catalog: repoze.catalog is just fine, working on both systems.
  • Content persistence: souper, created by Blue Dynamics, designed to work on both pyramid and Plone.
  • Settings persistence: using annotations. Very basic, but it just works. Both contents can be IAttributeAnnotatable (see the sketch after this list).
  • Forms and widgets. Substance D has Deform, but it is not rich enough. Porting z3c.form to Substance D... maybe not. So: client side rendering with Angular Schema Form.
  • Access control. Both systems have a granular ACL service. Probably possible to support them both transparently, but for now I created a custom security implementation.
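As a hedged illustration of the annotation-based settings storage mentioned above (the annotation key is hypothetical; zope.annotation works the same on both platforms):

from persistent.mapping import PersistentMapping
from zope.annotation.interfaces import IAnnotations

ANNOTATION_KEY = 'rapido.settings'  # hypothetical key


def save_setting(context, name, value):
    # context must provide IAttributeAnnotatable
    annotations = IAnnotations(context)
    if ANNOTATION_KEY not in annotations:
        annotations[ANNOTATION_KEY] = PersistentMapping()
    annotations[ANNOTATION_KEY][name] = value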

My experience with Substance D. Pros:

  • Fun.
  • Happy to find all the good ingredients.
  • Fast testing.

Cons:

  • Not 100 percent ZCA ready: you need to call config.hook_zca(). It works fine, no problem, I am just not comfortable with the 'hook' term here. Also, we would probably need a local registry.

Conclusions for me about Plone future:

  • ZCA plus buildout plus ZODB make our identity, and we must preserve it. It sets us apart, it is something strong.
  • We can find clever approaches to avoid a full rewrite. For example do more in plone.api instead of relying on what Zope does for us.
  • Can we easily migrate to Substance D? No.
  • Should we migrate to something else? No.

Watch the video and slides of this talk.

by Maurits van Rees at 2014-11-16T17:11:49Z

This talk is about the ENCODE portal, an encyclopedia of DNA elements.

I want to talk here about a pattern, not really about the specific technologies involved.

I work at the data coordination center, generating and organizing data. We get metadata submissions and store them in a metadata database. It is really a knowledge management system. It could have been built in Plone, but it would not be a great fit. I started from scratch based on Pyramid and ReactJS. Nowadays services more and more have a JavaScript UI, where the JavaScript talks to the backend.

Embrace JavaScript. There has been progressive enhancement, but single-page web apps really need it. For building a portal, I have been looking at isomorphic web applications, originally from the Render framework. It is important that pages load quickly: the exit rate for visitors climbs quickly with the loading time.

JSON is the lowest common denominator for data. XML is more flexible, but more complex. In Python it is much easier to use JSON.

JSON-LD: JSON Linked Data. Adopted recently by the W3C. It is partly about semantic data, but we are not using that yet.

At first we needed to duplicate the routing information on the server and the client side. JSON-LD allows us to express type information, which avoids the duplication.

You can have JSON-LD like this [in "pseudo json" here, just for the idea, Maurits]:

{
  @context: context/jsonld
  @id: ...
  @type: [biosample, item]
  @submitted_by: {
    @id: /users/lrowe
    @type: [user, item]
    name: Laurence Rowe
  }
}

Defined using JSON Schema, which is an IETF draft. Schema version, field definitions, types, validators. It is an extensible format.

All our normalized data is stored in Postgres as JSON. JSON-LD has the concept of framing, which determines which objects should be embedded (mapping to a view in Plone terms), possibly doing some elasticsearch. Above it is server rendering (with NodeJS) that creates HTML and sends it to the browser. After the first page load, the browser uses JavaScript to talk to the JSON-LD backend directly, instead of via the NodeJS server, letting the ReactJS JavaScript do its own rendering.

The browser opens a page and gets HTML back. Then it does a query for JSON data, gets that back, and shows it on the page.

Indexing linked data: updating one item may require a reindex of other items that link to it. We have written code for that, using elasticsearch as a cache.

Come work with us, we are hiring.

See code at https://github.com/ENCODE-DCC

Watch the video and slides of this talk.

by Maurits van Rees at 2014-11-16T17:10:22Z

This is a big picture presentation. But don't worry, there will be a demo at the end.

United we stand, divided we fall. Plone is in a rough spot. Let's design and build some solutions. Yes, Wordpress is eating our lunch and our cake, but we should not sit back.


I have my own company, Cosent.

The Plone community is good. The backend code is good. The user interface was good, but has fallen back. Good enough is not good enough anymore. We need to do better. Only excellent user experience will win us customers.

When, as a Plone company, you want to sell Plone as an intranet, you have a good starting point, but you are missing lots of pieces. You would have to build those yourself and let one customer pay for it. That is not going to happen, or at least it is a hard sell.

In the Plone community, the number of commits per month is rising, and the number of committers per month is also rising. Doing pretty well.

So what is wrong? We need to evolve or we will die, like dinosaurs. Plone Core currently is Web 1.0. We need a Plone Social, web 2.0: read/write, social networking, activity stream, time centric, personal perspectives, bottom-up sharing.

Roadmap: http://cosent.nl/roadmap

In 2010 I had the choice to ditch Plone or fix Plone. I chose to put all my eggs in one basket: Plone. Is Plone Social good enough? No. Look at Jive, that is some serious competition. We do not need to beat such a SAAS product, but we need to be close. Then customers will still prefer the local company that is closer to them and more attuned to their needs.

I think the "spare time" model of development is broken. It has brought us this far, but it is not enough anymore. Stuff is nearly finished on sprints, and then lags too long.

We need a new model. We have the talent, the technology. We can do it. We need to invest in a high quality, out-of-the-box product baseline. Low purchase cost, immediate sales appeal, fast delivery, shared maintenance across the community, new community ethos in collaborating together.

As the Plone Intranet Consortium we band together and want to work on this. We had a meeting after the Plone conference last year. Every company there had their own workspace solution; everyone was maintaining their own stack. But none of them had enough resources to generalize it for the community.

Design first. Design is not about eye candy. You start with a decent vision of what your strategy is, what your project is trying to solve.

Roadmap-driven Scrum development. Normal working day, in company time. Legitimate leadership serves the community. The consortium board funds the roadmap. Investment per company: 1000 euro per month, plus one developer day per week. Cash is used to hire people to help with the design. Sprint every Wednesday.

It is 100 percent open source. It is not a product that we will make money on. We will make money on the service we deliver. We want to move the license to the Plone Foundation, we will talk about that.

What we are developing, are add-ons, not a core fork. Plone 5 compatible. Will port to mockup. You are welcome to join the consortium.

Cornelis has made a patternslib-based clickable prototype that needs no backend to operate.

Demo by Alexander Pilz.

User experience sells. We showed this demo to a client last week and he thought it was an impressive preview of social functions in future Plone.

Roadmap. Phase one: activity streams, team spaces, dashboards, document structures/wiki. Phase two: calendaring, search, news hub.

We are pioneering a new business model for open source.

  1. Dream a vision.
  2. Combine investment.
  3. Design first! Use dedicated designers.
  4. Develop and release.
  5. (or really 3.1 already) Win customers.

We can boldly go where no one has gone before. We are Plone, we can do anything.

We have an open space tomorrow. Welcome! Sprint on Saturday and Sunday.

Code: https://github.com/ploneintranet/ploneintranet.suite

Watch the video and slides of this talk.

by Maurits van Rees at 2014-11-16T17:09:08Z

I am owner of Klein & Partner, member of Blue Alliance, Austria. Doing Plone since version 1.0.

Default Plone is not so fast. It scales great horizontally (by adding machines), but there are still bottlenecks, primarily loading stuff from the ZODB.

First customer: Noeku, over 30 Plone sites, high availability, low to mid budget, self-hosted on hardware and VMs. The pipeline is: nginx, varnish, pound, several Plone instances, databases (zodb, mysql, samba).

Second customer: Zumtobel, brand-specific international product portals, customer extranets, B2B e-shop, hosting on dedicated machines.

Third customer: HTU Graz, one Plone site with several subsites (with Lineage), lots of students looking at it, so we have peak loads.

Main Plone database is ZEO or PostgreSQL, plus blobstorage (NFS, NAS). Load balancer: haproxy or pound. Caching proxy: varnish (don't use squid please). Webserver: nginx (better not use apache).

The Plone instances and the client connection pool (between Plones and database) can use memcached (maybe multiple), LDAP, other third party services.

If you want to improve things, you must measure: use munin, everywhere you can. fio is a simple but powerful tool to get measurements of your IO. Read up on how Linux manages disk/RAM. Know your hardware and your VMs (if any).

Database level

  • Noeku: ZEO server and blobstorage, both replicated with DRBD.
  • Zumtobel: RelStorage on PostgreSQL, blobs from NAS over NFS.
  • HTU Graz: RelStorage on PostgreSQL, all on one machine.

First things first: never store blobs in the ZODB; use blobstorage. In standard Plone 4.3, images from news items are stored in the ZODB. You can change that. Check your code and add-ons.

ZEO server plus blobstorage: ensure a fast IO to harddisk or RAM, and have enough RAM for disk buffering.

Blobstorage on NFS/NAS: shared blobs and mount them on each node. Mount read-only on web server node and use collective.xsendfile (X-HTTP-Accel) to make it faster.

RelStorage plus blobstorage: never store blobs in the SQL database (same as zodb). No MySQL if you can avoid it. Configure your SQL database.

Connection pool: ZEO vs RelStorage. The ZEO server pushes invalidations to its clients; a RelStorage client polls for invalidated objects. There is a disk cache of pickled objects per Zope instance. On the RelStorage side you can use memcached, which is a big advantage, reducing load on the database.

  • Noeku, ZEO.
  • Zumtobel: RelStorage, history free, 2 VMs, 16 instances plus some worker instances for asynchronous work, each 2 or 4 threads. RAM cache 30,000 or 100,000 objects, memcached as shared connection cache. If packing takes too long, try relstorage_packer.
  • HTU: RelStorage, history free. 6 instances, each with 1 thread. RAM cache of 30,000 objects. This is something you need to tweak: try out some values and measure the effect. Memcached. Poll interval of 120 seconds. Blobstorage: shared folder on the same machine. (A minimal config sketch follows below.)
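For illustration, a hedged sketch of what such a RelStorage setup might look like in zope.conf/ZConfig terms (database name, addresses and values are placeholders; check the RelStorage documentation for your version):

%import relstorage

<zodb_db main>
    cache-size 30000
    mount-point /
    <relstorage>
        keep-history false
        poll-interval 120
        cache-servers localhost:11211
        <postgresql>
            dsn dbname='plone' host='localhost'
        </postgresql>
    </relstorage>
</zodb_db>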

The above is not specific to Plone. The below is.

Plone

  • Turn off debug mode, logging, deprecation warnings.
  • Configure plone.app.caching, even if you are not using varnish. Browsers cache things too and it can really help.
  • Multiple instances: use memcached instead of standard ram cache.
  • Know plone.memoize and use it (see the sketch after this list).
  • Never calculate a search twice. Check your Python and template code to avoid things that boil down to: if expensive_operation(): expensive_operation().
  • Use the catalog.
  • Do not overuse metadata: if you add too many metadata columns to the catalog brains, they may become bigger than the actual objects, slowing your site down.
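A minimal sketch of the plone.memoize RAM-cache approach mentioned above (the function and the five-minute period are hypothetical):

from time import time

from plone.memoize import ram
from Products.CMFCore.utils import getToolByName


def _search_cache_key(method, self, query):
    # cache per query string; the time bucket expires entries
    # roughly every five minutes
    return (query, time() // 300)


@ram.cache(_search_cache_key)
def expensive_search(self, query):
    catalog = getToolByName(self.context, 'portal_catalog')
    # return plain paths: brains are cheap, and objects stay asleep
    return [brain.getPath() for brain in catalog(SearchableText=query)]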

Write conflicts:

  • 90% of write conflicts happen in the catalog.
  • To avoid it, try to reduce the time of the transaction. Hard in standard situations, but you may be able to first prepare some data and later commit it to the database.
  • Use collective.solr or collective.indexing. I hope that in Plone 6 we will no longer have our own catalog, but use SOLR.

Lots of objects, hundreds of thousands? Are catalog queries slow? Use a separate mount point for the portal_catalog, with higher cache sizes.

Archetypes versus Dexterity. In AT, you should avoid waking up the object; ask the catalog instead. With Dexterity, it is sometimes cheaper to wake up the object: if objects are small and you iterate over a folder or subtree, or if adding lots of metadata to the catalog would otherwise be needed.

Third party services, like LDAP and other databases, need to be cached. Talking to external systems over the network is slow.

In case of serious trouble: measure! munin, fio, collective.traceview, Products.ZopeProfiler, haufe.requestmonitoring, Products.LongRequestLogger. Change one thing at a time and measure it. Really important!

plone.app.caching: always install it. For custom add-ons with their own types and templates, you need extra configuration for each type and template. Do this! Budget some time for this task; it is some work, but it is well documented.

On high traffic, introduce a new caching rule for one/two/five-minute caches; it really helps against peak load.

Load balancer: Pound is stable but old, and it is difficult to get measurements out of it. Haproxy is not that simple, but it is newer and has a nice web UI for stats. You should point the same request type to the same instance.

Web server: nginx. Set the proxy_* to recommended values.

Read more: on docs.plone.org.

Looking up attributes in Dexterity can be slow; we need to fix that.

Watch the video of this talk.

by Maurits van Rees at 2014-11-16T17:07:24Z

Steve McMahon: Plone installation, the future

What is the Installer team doing for Plone 5. You can still use your custom buildout, no problem, so your future may still vary.

Unified Installer will still work. Platform binary installers may not work; possibly Windows; nearly certainly not OS X, as it gets harder and harder to do open source development there.

The hot newness is cloud or VM kits. We are working on making that happen for Plone too. Our principal tool is going to be Ansible: build a full-stack server, with a Vagrant instance to test it before you deploy it. We are not scaring people away from other approaches.

Targets: from 5 to 5000 dollars per month, with configuration. Working on boxes for Digital Ocean, AWS, etc.

Plone 5: no longer ZopeSkel or templer, but mr.bob as the tool for bootstrapping a new project. You are free to use the tool you like.

Paul Tax: Anniversary sprint

I am the new guy at Four Digits, Arnhem, The Netherlands. We are organizing a sprint from 22 to 26 June 2015. Hotels nearby, good wifi, standing desks if you want, agile planning, standups, food and drinks. Party on Friday, because it is our ten-year anniversary. No problem if you only want to join for the party. Home-brewed beer.

Paul Roeland: Paragon

Paragon is the search for the very best in Plone add-ons. The hunt is on, especially for add-ons making the lives of integrators easier. No undead add-ons please, no unmaintained. Just good, useful add-ons. You can nominate until November 14. Are they maintained? Are they safe to put on a production site? Go to http://paragon.plone.org and nominate an add-on. Sorry, there are a lot of fields to fill in, but only a few are required. But the more the better, thank you for helping us. Results will be announced in December. We will feature those on the new plone.org.

Asko Soukka: venusianconfiguration

Venusian configuration: no more ZCML. Put that configuration in a Python file. It can scan modules, a bit like Grok, but explicitly. It will not be released, because I don't want all the hate mail. :-) But maybe it will be incorporated into Zope configuration.

See https://github.com/datakurre/collective.venusianconfig

Sally, Cris, Jens, Alexander, Andreas: bibliographies

Bibliographies are really important to a certain segment of users, academics specifically. There is funding to make some things better. This is a call for help. I scheduled an open space for this topic on Friday, please come and give your input. Create a plan of action.

Manabu Terada: Recent topics about Plone in Japan

PyCon Japan, September 2014: 500 people, one English track. I conducted a session about Plone; about 100 people attended. The number of Python developers is increasing in Japan. We wish to spread Plone usage in Japan. Next year: Plone Symposium Tokyo, May/July 2015. We want to get interesting sessions, so speakers from overseas are very much invited. At the airport and in the big city English is no problem; outside it may be difficult. Tokyo is very cheap. Safe? Yes, we will even hold the Olympics in 2020. See you in Tokyo next year.

Maurizio Delmonte: Plone 5 at Sorrento

Abstract has also existed for ten years. Join us at the Sorrento sprint next year. Don't wait until the end to reserve your room. It is somewhere in April 2015.

Here is a video of last year with people talking about Plone 5.

Bogdan Girman: dexterity base classes

Working at Der Freitag, Berlin. We have a Dexterity content type, Article, and we needed to change its base class. One trick would be to delete the object and add it to the parent again, but that does not work with the event handlers that we also have. The main code to work around that:

parent._delOb(...)
parent._setOb(...)
obj.reindexObject(idxs=['is_folderish', 'object_provides'])

Eric Brehault: Why CMS won't die

All CMSes are old. Even WordPress, a young one, dates from 2003.

We can make a no-CMS site quite easily. But a validation workflow and access rights would be nice. Newsletter. Document sharing, with some people. Multilingual. Next thing you know, your no-CMS crashes.

Face it, you need a CMS!

CMS are special. We are CMSistas, and only CMSistas know it. CMS are huge projects. For example: who needs buildout in Python, except us?

Wordpress is about 60 percent of all CMS websites. Huge! But 100 percent tomorrow? No way. I never saw Wordpress at any of my customers.

Nobody promises it will be easy. Is Plone too complex? Every CMS is complex.

Watch the videos of these lightning talks, part 1 and part 2.