Planet Plone

This is where developers and integrators write about Plone, and is your best source for news and developments from the community.

July 25, 2014

Bo Simonsen: Setting up traceview for Plone

by Bo Simonsen at 2014-07-25T21:36:05Z

As I described in my previous blog post, traceview can run on a stand-alone Plone site, i.e. without a front-end webserver. In this blog post I will describe the steps required to get started with traceview for Plone.

Step 1 – login to traceview

The first step is of course to log in to the traceview interface; if you don’t have an account, sign up at AppNeta’s homepage. They apparently offer a free plan if you have just one application, i.e. one Zope instance. After you have logged in, go to “Get started” and select “Install host agent”. You will then see the appropriate command to run on the server. This installs the daemon that sends data to AppNeta, along with various C libraries.

Step 2 – set up your Python environment

You need the oboe module to use collective.traceview. It can be installed via pip/easy_install; I recommend using a virtual environment if you choose this approach. It can also be installed via buildout. The oboe egg is on PyPI, so adding it to the egg list should not be a problem. Also, remember to add collective.traceview to the egg list.
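For illustration, the buildout part could look something like this (assuming your Zope instance part is named [instance], as in the configuration further below):

[instance]
...
eggs +=
    oboe
    collective.traceview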

Step 3 – setting up environment variables

You need to set a series of environment variables to tell Plone that it should trace, and how the tracing should be done. Ideally, you should set these environment variables via buildout, since you may have different profiles, e.g. test, production, etc. You should end up with a buildout configuration similar to this:

[instance]
...
environment-vars +=
    TRACEVIEW_PLONE_TRACING 1
    TRACEVIEW_IGNORE_EXTENSIONS js;css;png;jpeg;jpg;gif;pjpeg;x-png;pdf
    TRACEVIEW_IGNORE_FOUR_OH_FOUR 1
    TRACEVIEW_SAMPLE_RATE 0.3
    TRACEVIEW_TRACING_MODE always

Most of these variables are quite self-explanatory, and each of them is explained in the README of collective.traceview. Basically, the configuration says that Plone should trace (TRACEVIEW_PLONE_TRACING 1), that no tracing should be done for js, css and similar files (TRACEVIEW_IGNORE_EXTENSIONS js;css;png;jpeg;jpg;gif;pjpeg;x-png;pdf), that 404 pages should not be traced (TRACEVIEW_IGNORE_FOUR_OH_FOUR 1), and that we trace just 3 out of 10 requests (TRACEVIEW_SAMPLE_RATE 0.3).

Now just run buildout, and new traces should appear under “Default app” in the traceview interface. So you can see what an example trace looks like, I have attached a screenshot below (each layer can be inspected and further information is provided; for example, for catalog queries the entire query is logged).

[Screenshot: an example trace in the traceview interface]

Happy tracing!


July 24, 2014

Bo Simonsen: Improved traceview support for Plone

by Bo Simonsen at 2014-07-24T21:07:38Z

Finally back from summer vacation, three weeks without a single line of code – how refreshing! It has been a long time since we did any work on collective.traceview, but we have finally implemented a feature that has been wanted for a long time.

Until now the module has relied on a front-end web server (for example, Apache or nginx) to kick off the trace. What happens is that the oboe module in Apache generates a unique trace id, referred to as X-Trace, which is sent to Plone and used as the reference for the actual full-stack tracing. Doing it this way is not always a good idea; consider the following scenarios:

  • You have no Apache (or nginx), maybe just Varnish in front distributing requests directly to the Plone instances; why waste CPU power on running an additional web server?
  • You have Apache, Varnish and then Plone. You will get quite bogus traces for cached pages served directly by Varnish, showing just Apache.

For these scenarios it makes much more sense to start the tracing in the ZServer HTTP server. So we added an additional patch to the product that patches the Medusa-based implementation (I was not turned into stone for looking at it). This patch can start the tracing, which gives us benefits beyond leaving out the front-end web server: it is now also possible to see the actual waiting time from when a request hits ZServer until it is served by the publisher. This may give you a hint as to whether you need more ZServer threads or more Zope instances. For example, on the screenshot below you can see a little waiting time around 9.00 AM between the ZServer HTTP server and the Zope publisher.

[Screenshot: TraceView layer summary showing waiting time between the ZServer HTTP server and the Zope publisher]

If you want to play with it, just follow the instructions in the README of collective.traceview to set the proper environment variables; these are preferably set using buildout. The feature is provided by collective.traceview 1.3.

Please let me know if you have any feedback on the feature, or questions on how to use it.


July 23, 2014

Wichert Akkerman: Lingua 2.4 released

2014-07-23T00:00:00Z

This is a bugfix release which fixes several problems reported by users.

Lingua is a Python package that helps you find translatable texts in your code and generate a POT file for them. Think of it as xgettext on steroids for Python.
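As a rough illustration of the kind of code Lingua scans, here is a module using a zope.i18nmessageid message factory (the 'myaddon' domain and the message ids are made up for this example):

from zope.i18nmessageid import MessageFactory

# Lingua's Python extractor picks up calls to the message factory,
# including the default text, and writes them to the POT file.
_ = MessageFactory('myaddon')

title = _(u'myaddon_front_page_title', default=u'Welcome to my add-on')
teaser = _(u'myaddon_front_page_teaser', default=u'This text is translatable.')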

Read entire article.

July 18, 2014

Four Digits: Upgrading Plone 3.3 to Plone 4.3

by Kees Hink at 2014-07-18T09:10:00Z

Recently we upgraded a Plone 3.3 site to 4.3. Here are some thoughts regarding this.

We recently moved a long-time customer's site to Plone 4.3.

First off, the list of steps to follow in http://plone.org/documentation/manual/upgrade-guide/version/ was quite exhaustive. There were some things that we did differently:

  • We went straight from Plone 3.3 to 4.3.3, skipping the intermediate versions (4.0, 4.1 and 4.2). For our site this worked.
  • After that, we ran all upgrade steps at once.
  • We had to manually remove some Javascripts (mostly Kupu) from the registry, in order for TinyMCE to work; a sketch of that cleanup follows below.
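For reference, such a cleanup can be done from an upgrade step or a debug prompt along these lines (the resource ids differ per site, so the ones below are only examples):

from Products.CMFCore.utils import getToolByName

def remove_kupu_javascripts(portal):
    # Unregister leftover Kupu resources so TinyMCE can take over.
    js_registry = getToolByName(portal, 'portal_javascripts')
    for resource_id in ('kupu_library.js', 'kupueditor.js'):  # example ids only
        if js_registry.getResource(resource_id) is not None:
            js_registry.unregisterResource(resource_id)
    js_registry.cookResources()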

The sooner you upgrade to a more recent stable version, the smaller the hassle. Also, there's a performance improvement (see image), and some nice new features. See http://www.sixfeetup.com/blog/five-reasons-why-waiting-isnt-the-best-upgrade-strategy for more reasons to upgrade.

Wichert Akkerman: Get your REST on

2014-07-18T00:00:00Z

TL;DR: rest_toolkit is a new framework for creating REST applications, built using Pyramid.

Two recent projects both involved implementing a REST service. The experience of doing that led me to create a new simple but extensible toolkit for building REST applications. Want to see how simple? This is a full REST application:

from rest_toolkit import quick_serve, resource

@resource('/')
class Greeting(object):
    def __init__(self, request):
        pass

@Greeting.GET()
def show_root(root, request):
    return {'message': 'Hello, world'}

if __name__ == '__main__':
    quick_serve()  # serve the app with a simple development WSGI server

Read entire article.

July 14, 2014

Connexions Developers Blog: OpenStax CNX Development Tools

by Ed Woodward at 2014-07-14T20:14:53Z

Over the last couple of years, we have changed our internal development tools. We do our own version of Agile development and have found these tools best meet our needs.

For Sprint planning, we use Trello.  Our team members are in many locations, so having a web-based tool to outline our Sprints has been very important. We create cards for User Stories or issues and work from the boards for Sprint planning and working on Sprints.


Our code is stored in Github. We previously ran our own SVN server, but slowly migrated all of our code to Github. It is a great tool. Our workflow for using Github is:

  • Each component has a separate Repository.
  • Each Repository has a Master branch.
  • Each Repository has a production branch that contains the code currently in production. This allows us to continue working and merging to Master, while still being able to fix problems in production easily from the production branch.
  • Developers branch off Master and code the Trello card they are working on. Once the code is completed and unit tested, the developer creates a Pull Request in Github. The Pull Request is to merge the code into Master.
  • A Pull Request triggers a code review by another team member.  Code reviews generally result in a review of the code as well as a manual test of the code.
  • Pull Requests are also unit tested automatically via Travis-CI.
  • Once a Pull Request is approved, it is merged and the branch is deleted from Github.

Most of our meetings are held on Skype. Skype has the simplicity of making a phone call and is mostly reliable. We also use Google Hangouts when we need to share code or do other screen sharing. It works really well, but is not as easy to start up as a Skype call.

Our team relies on IM. We have a Jabber server that some of the team uses and others use Google Talk or Hangouts.  IM is our virtual hallway and is a key part of our communication.

Six Feet Up: Plone and Drupal Coexistence in Higher Ed (PSM14 Recap)

by Calvin Hendryx-Parker at 2014-07-14T20:10:00Z

This is a recap of Calvin Hendryx-Parker's presentation at the Plone Symposium Midwest 2014.

Content is King

Everyone in marketing has become obsessed with content marketing, and the demand for more people in more departments to have a way to create content has driven the creation of more websites.

Fast Forward

The problem with this rapid explosion of content across organizations is that many websites have been rapidly created on different platforms with no central strategy.

Plone & Drupal have a 70% Coexistence in Higher Ed

Because of this, we have seen that as recently as March 2014, about 70% of U.S. universities that use Plone also use Drupal to some degree.

How Do you Control Web Branding, Content & Infrastructure?

Consolidating is an option

Some organizations choose the obvious approach of trying to consolidate all of their websites onto a single platform. This can work, and we even have a solution for it called WebUnity, but it can also be expensive and time-consuming. You have to:

  • Evaluate CMSs and vendors
  • Migrate all your content and themes
  • Deal with Bit Rot
  • Train everyone on the new system

This can also be demotivating and polarizing when people refuse to change.

There is another option: Integration!

  • Keep all your content and websites in their existing CMS
  • Connect those sites so they can syndicate and track content

UCLA Case Study

UCLA is a large university with a central IT department, but all of the content management is done independently by the various departments. Integration via a tool like PushHub allows independent teams to share content across sites and keep the content up to date as it changes or is retracted.

What is PushHub?

PushHub is a content syndication system built with:

  • Pyramid w/ ZODB
  • Redis
  • Feedparser
  • Solr

It uses the Pubsubhubbub standard from Google.
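Since it speaks PubSubHubbub, the basic interaction with a hub is plain HTTP form posts. As a hedged sketch of a publish notification (the hub and feed URLs below are placeholders, not PushHub's real endpoints):

import requests

# Tell the hub that a topic (an Atom/RSS feed on the publishing site) has
# new or updated content; the hub then notifies its subscribers.
HUB_URL = 'http://hub.example.org/publish'        # placeholder hub endpoint
TOPIC_URL = 'http://site-a.example.org/feed.xml'  # placeholder feed URL

response = requests.post(HUB_URL, data={
    'hub.mode': 'publish',
    'hub.url': TOPIC_URL,
})
response.raise_for_status()  # a 2xx response means the hub accepted the ping

Subscribing works the same way, with hub.mode=subscribe plus a callback URL that the hub verifies.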

PHP - I can't believe I'm about to do this...

PushHub can easily be called from PHP based CMSs like Drupal and WordPress as well as from Plone. See slides 18-21 above for sample code.

Demo

You can easily create content across several Plone and Drupal websites.

  • When creating content, simply publish and share to PushHub
  • Then on another site you can search PushHub and select content to show up in a related articles widget or other content areas
  • You can control whether just the title or an extract shows up
  • You can even copy an entire article from one site to the other and keep it in sync and with the canonical source marked correctly for SEO
  • When you update the master copy, all other copies get a notification and update instantly

See the slides for examples or the webinar recording in the link below.

Learn More:

Netsight Developers: Plone Intranet Development Sprint update (July 2014)

2014-07-14T13:43:58Z

After another successful development sprint, we are pleased to announce the release of ploneintranet.workspace 1.0!

We covered some of the background to this development in our previous blog post, but this sprint was focussed on finishing, tidying, documenting and getting an initial release out!

ploneintranet.workspace

A core building block of the Plone Intranet solution, ploneintranet.workspace aims to provide a flexible team/community workspace solution, allowing teams of users to communicate and collaborate effectively within their own area of an intranet. Plone's extensive permissions are distilled into a set of distinct policies that control who can access a workspace, who can join a workspace, and what users can do once they are part of a workspace.

An Intro to Workspace Policies

Three realms of access are controlled via a single ‘policies’ tab on the workspace container:

External visibility

Who can see the workspace and its content?

  • Secret
    Workspace and content are only visible to members
  • Private
    Workspace is visible to non-members
    ‘Published’ Workspace content only visible to members
    ‘Public’ Workspace content visible to all
  • Open
    Workspace is visible to non-members
    ‘Published’ Workspace content visible to all

Join policy

Who can join / add users to a workspace?

    • Admin-managed
      Only workspace administrators can add users
    • Team-managed
      All existing workspace members can add users
    • Self-Managed
      Any user can self-join the workspace

Participation policy

What can members of the workspace do?

  • Consumers
    Members can read all published content
  • Producers
    Members can create new content, and submit for review
  • Publishers
    Members can create, edit and publish their own content (but not the content of others)
  • Moderators
    Members can create, edit and publish their own content and content created by others.

Policy Scenarios

These policies are designed to be combined in ways that produce sensible policy scenarios. Some example use cases might be:
  • Open + Self-managed + Publishers = Community/Wiki
  • Open + Admin-managed + Consumers = Business Division/Department
  • Private + Team-managed + Publishers = Team
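ploneintranet.workspace applies these policies through its own machinery, but conceptually each combination boils down to Plone's existing roles and sharing. A simplified sketch of the underlying idea, expressed with plone.api (the workspace path, user and roles below are purely illustrative, not the package's actual implementation):

from plone import api

# A 'Publishers' participation policy roughly corresponds to workspace
# members getting extra local roles on the workspace folder.
workspace = api.content.get(path='/workspaces/team-alpha')  # illustrative path
api.user.grant_roles(
    username='jane',                              # a workspace member
    roles=['Contributor', 'Editor', 'Reviewer'],  # roughly: create, edit, publish
    obj=workspace,
)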

Give it a try!

We would love you to try this package out and give us your feedback. For more information including how to download and install, see the documentation at readthedocs.org.

Further reading:


July 11, 2014

Andreas Jung: Firmenänderung (Company Change)

2014-07-11T15:52:31Z

Company change

Andreas Jung: Goodbye MongoDB

2014-07-11T15:46:19Z


Company announcement

July 10, 2014

Abstract Technology: 5 reasons to adopt Plone 5 as soon as released

by Maurizio Delmonte at 2014-07-10T15:51:25Z

A completely redefined user interface, with improvements in usability and accessibility, is just the most visible of the advantages the new version of Plone will provide.

Six Feet Up: Merging 120 Sites into a Multisite Plone Solution (PSM14 Recap)

by Clayton Parker at 2014-07-10T15:40:00Z

This is a recap of my presentation at the Plone Symposium Midwest 2014.

Managing Chaos: Merging 120 Sites into a single Plone Multisite Solution

Who Am I?

Clayton Parker
  • Director of Engineering, Six Feet Up
  • Organizer, IndyPy, Indianapolis Python Users Group

What will we Learn?

This talk covers:

  • Six Feet Up's multisite solution with Plone and Lineage
  • How we went about consolidating 120 Plone sites into one multisite solution in less than 90 days
  • How this improved performance

Discovery

Penn State has been a long-standing client of Six Feet Up. The College of Liberal Arts asked us to look into the performance of their 120 eLearning course sites. We saw this as a great opportunity for them to save time and money by consolidating everything instead of maintaining all the separate sites.

Old Site Creation Workflow

The old method of creating a new course involved copying and then pasting a Plone site within their Zope instance. This involved a lot of manual steps to fill in the placeholder metadata in the pasted course. It also required a catalog clear and rebuild, since the paths to all the objects had changed and the pasted site would not function correctly until the catalog was rebuilt.
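For reference, that clear-and-rebuild step is the "Clear and Rebuild" button on the portal_catalog's Advanced tab in the ZMI, or roughly the following from a debug session (the site id below is just an example):

from Products.CMFCore.utils import getToolByName

# In a "bin/instance debug" session, where "app" is the Zope application root:
site = app['Plone']  # id of the pasted site; adjust to match yours

# Re-index everything so the catalog entries point at the new object paths.
catalog = getToolByName(site, 'portal_catalog')
catalog.clearFindAndRebuild()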

Performance

One of the main issues with the old implementation was that there were 120+ copies of all the objects needed to create a Plone site. That means 120 catalogs, Quickinstallers, properties, registries, etc. There was a lot of needless duplication in the scenario which hurt the performance of each site. Since they were all housed in one Data.fs, there was no easy way to avoid the loading of all these duplicate objects.

Migration

We used Transmogrifier to export all of the content from the 120 sites into something we could later pull back into a single Plone site. When pulling the content back into Plone, each department was set up with its own folder. Inside those department folders, the courses were added. This new layout provides more flexibility for giving access to a whole department or to just one course.
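The actual pipelines are project-specific, but a collective.transmogrifier pipeline is just a named chain of blueprint sections in an ini-style configuration. A heavily simplified sketch of what an import pipeline can look like (the [source] section and its blueprint name are hypothetical stand-ins for whatever reads the exported data):

[transmogrifier]
pipeline =
    source
    constructor
    schemaupdater

# Hypothetical custom source that yields one item (with _type and _path keys)
# per exported object.
[source]
blueprint = my.migration.exportedcontentsource
directory = /path/to/exported/content

# Creates the objects at their target paths.
[constructor]
blueprint = collective.transmogrifier.sections.constructor

# Fills in the Archetypes schema fields from the item data.
[schemaupdater]
blueprint = plone.app.transmogrifier.atschemaupdater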

How is it made?

Lineage Multisite Management for Plone

For the department and course types we utilized Lineage, an open source Plone product built by Six Feet Up. Lineage is a simple product that allows the course or department to appear as an autonomous Plone site. It uses Plone's navigation root mechanism to make the navigation menu, search, portlets and anything else that does a search appear to be rooted at that level.
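Under the hood, Lineage marks those child folders so that Plone treats them as navigation roots. The effect is roughly what you get by applying the INavigationRoot marker yourself; a simplified sketch of the idea, not Lineage's actual code:

from plone import api
from plone.app.layout.navigation.interfaces import INavigationRoot
from zope.interface import alsoProvides

# Mark a department folder so navigation, search and portlets treat it
# as the root of its own "site".
portal = api.portal.get()
department = portal['liberal-arts']  # illustrative folder id
alsoProvides(department, INavigationRoot)
department.reindexObject(idxs=['object_provides'])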

New Site Creation Workflow

Now, whenever a new course needs to be added, it's just like creating any other new content in Plone. In each department folder there is the option under "Add new..." to add a new Course folder.

These course folders have additional fields for the author, course number and banner images. Things that were previously manually filled out are now just a part of the content creation process.

In addition to having the fields on the type, events are used to create content and automatically add faculty and staff to the course.
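The handlers themselves are project-specific, but the pattern is a standard object-added event subscriber. A hedged sketch (the ICourse interface defined here is a placeholder for the real course content type interface, and the group name is made up):

from plone import api
from zope.component import adapter
from zope.interface import Interface
from zope.lifecycleevent.interfaces import IObjectAddedEvent


class ICourse(Interface):
    """Placeholder for the real Course content type's interface."""


@adapter(ICourse, IObjectAddedEvent)
def setup_course(course, event):
    """Registered as a ZCML subscriber; runs whenever a Course is added."""
    # Pre-create some standard course structure.
    api.content.create(container=course, type='Folder', title='Assignments')
    # Give the (illustrative) staff group edit rights on just this course.
    api.group.grant_roles(groupname='course-staff', roles=['Editor'], obj=course)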

Permissions

Since we are still utilizing Plone and its folder structure, we can still use the built-in permission system. Global roles and groups can apply to the whole site, or local roles can be given to a user or group at the department and course level. This provides an easier way to manage users across the 120+ sites.

Disadvantages

There are a few disadvantages that come along with a system housed in one Plone site. If there was a need to split out a course or department into a new site, this would require a migration.

Since everything is in one Plone site, any add-ons or properties are going to apply to the whole site. It would be more difficult to restrict the functionality of an add-on to one particular course or department.

Advantages

On the flip side, having one set of add-ons to manage can be easier than 120 different configurations of installed add-ons. Upgrading or re-installing is more of a one-click process with fewer headaches.

Since the one Plone site houses all the content, it can be easily shared across departments or courses. There is no need for any external access; it can just be used directly.

Upgrading the Plone sites will be much easier moving forward. Instead of having to deal with 120+ migrations, there is just one.

The biggest advantage here was the performance boost that was gained. The system can handle the load of all those procrastinating students logging in on Sunday to finish their assignments much better now!

Learn More:

Andreas Jung: Plone - the broken parts - Member schema extenders and plone.api

2014-07-10T06:56:22Z

This is a loose series of blog posts about parts of Plone that I consider broken from the perspective of a programmer. The blog entries are based on personal experiences with Plone over the last few months, collected in new Plone 4.3 projects and some legacy projects, but they also reflect experience gained from other non-core Plone developers involved in these projects (developers on the customer side).
