Planet Plone

This is where developers and integrators write about Plone, and is your best source for news and developments from the community.

May 02, 2017

Reinout van Rees: HTTPS behind your reverse proxy

by Reinout van Rees at 2017-05-02T13:13:00Z

We have a setup that looks (simplified) like this:

https://abload.de/img/screenshot2017-05-02a69bku.png

HTTP/HTTPS connections from browsers ("the green cloud") go to two reverse proxy servers on the outer border of our network. Almost everything is https.

Nginx then proxies the requests towards the actual webservers. Those webservers also have nginx on them, which proxies the request to the actual django site running on some port (8000, 5010, etc.).

Until recently, the https connection was only between the browser and the main proxies. Internally, inside our own network, traffic was http-only. In a sense that is OK, as you've got security and a firewall and so on. But... actually it is not OK. At least, not OK enough.

You cannot rely on only a solid outer wall. You need defense in depth: network segmentation, restricted access. So ideally the traffic from the main proxies (in the outer "wall") to the webservers inside it should also be encrypted, for instance. Now, how to do this?

It turned out to be pretty easy, but figuring it out took some time. Likewise with finding the right terminology to google for :-)

  • The main proxies (nginx) terminate the https connection. Most of the ssl certificates that we use are wildcard certificates. For example:

    server {
      listen 443;
      server_name sitename.example.org;
      location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://internal-server-name;
        proxy_http_version 1.1;
      }
      ssl on;
      ....
      ssl_certificate /etc/ssl/certs/wildcard.example.org.pem;
      ssl_certificate_key /etc/ssl/private/wildcard.example.org.key;
    }
    
  • Using https instead of http towards the internal webserver is easy. Just use https instead of http :-) Change the proxy_pass line:

    proxy_pass https://internal-server-name;
    

    The google term here is re-encrypting, btw.

  • The internal webserver has to allow an https connection. This is where we initially made it too hard for ourselves. We copied the relevant wildcard certificate to the webserver and changed the site to use the certificate and to listen on 443, basically just like on the main proxy.

    A big drawback is that you need to copy the certificate all over the place. Not very secure. Not a good idea. And we generate/deploy the nginx config on the webserver from within our django project, so every django project would need to know the filesystem location and name of those certificates... Bah.

  • "What about not being so strict on the proxy? Can't we tell nginx to skip the strict check on the certificate?" After a while I found the proxy_ssl_verify nginx setting. Bingo.

    Only, you need nginx 1.7.0 for it. The main proxies are still on ubuntu 14.04, which has an older nginx. But wait: the default is "off". Which means that nginx doesn't bother checking certificates when proxying! A bit of experimenting showed that nginx really didn't mind which certificate was used on the webserver! Nice.
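
    To see which certificate an internal backend is actually serving, a quick Python sketch like the one below helps. This is an illustration added here, not part of the original post, and "internal-server-name" is a placeholder:

    # Print the certificate an internal backend presents on port 443.
    # Verification is disabled on purpose: we only want to *see* the
    # certificate, not validate it (just like nginx does by default).
    import socket
    import ssl

    context = ssl.create_default_context()
    context.check_hostname = False   # must be disabled before CERT_NONE
    context.verify_mode = ssl.CERT_NONE

    with socket.create_connection(("internal-server-name", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="internal-server-name") as tls:
            der = tls.getpeercert(binary_form=True)
            print(ssl.DER_cert_to_PEM_cert(der))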

  • So any certificate is fine, really. I did my experimenting with ubuntu's default "snakeoil" self-signed certificate (/etc/ssl/certs/ssl-cert-snakeoil.pem). Install the ssl-cert package if it isn't there.

    On the webserver, the config thus looks like this:

    server {
        listen 443;
        # ^^^ Yes, we're running on https internally, too.
        server_name sitename.example.org;
        ssl on;
        ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
        ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
        ...
    }
    

    An advantage: the django site's setup doesn't need to know about specific certificate names, it can just use the basic certificate that's always there on ubuntu.

  • Now what about that "snakeoil" certificate? Isn't it some dummy certificate that is the same on every ubuntu install? If it is always the same certificate, you can still sniff and decrypt the internal https traffic almost as easily as plain http traffic...

    No it isn't. I verified it by uninstalling/purging the ssl-cert package and then re-installing it: the certificate changes. The snakeoil certificate is generated fresh when installing the package. So every server has its own self-signed certificate.
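
    A quick way to convince yourself is to compare fingerprints between two servers. The sketch below (an illustration, not from the original post) prints a SHA-256 fingerprint of the local snakeoil certificate; run it on two machines and the output should differ:

    # Fingerprint this server's snakeoil certificate.
    import hashlib
    import ssl

    with open("/etc/ssl/certs/ssl-cert-snakeoil.pem") as pem_file:
        der = ssl.PEM_cert_to_DER_cert(pem_file.read())
    print(hashlib.sha256(der).hexdigest())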

    You can generate a fresh certificate easily, for instance when you copied a server from an existing virtual machine template:

    $ sudo make-ssl-cert generate-default-snakeoil --force-overwrite
    

    As long as the only goal is to encrypt the https traffic between the main proxy and an internal webserver, the certificate is of course fine.

Summary: nginx doesn't check the backend certificate when proxying (at least not by default). So terminating the ssl connection on a main nginx proxy and then re-encrypting it (https) towards backend webservers which use the simple default snakeoil certificate is a simple, workable solution. And a solution that is a big improvement over plain http traffic!

April 24, 2017

eGenix: PyDDF Python Spring Sprint 2017

2017-04-24T11:00:00Z

This post announces a Python sprint in Düsseldorf, Germany.

Announcement

PyDDF Python Spring Sprint 2017 in
Düsseldorf


Saturday, 06.05.2017, 10:00-18:00
Sunday, 07.05.2017, 10:00-18:00

trivago GmbH,  Karl-Arnold-Platz 1A,  40474 Düsseldorf

Information

The Python Meeting Düsseldorf (PyDDF) is organizing a Python sprint weekend in May, with the kind support of trivago GmbH.

The sprint takes place on the weekend of 6/7 May 2017 at the trivago office at Karl-Arnold-Platz 1A (not at Bennigsen-Platz 1). We have the following topic areas in mind as suggestions:
  • Openpyxl
Openpyxl is a Python library for reading and writing Excel 2010+ files.

Charlie Clark is a co-maintainer of the package.
  • Telegram bot

Telegram is a chat application used by many people. Telegram supports registering so-called bots: small programs that can be driven from within a chat, e.g. to look up information.

During the sprint we want to try to write a Telegram bot in Python.

  • Jython (Python implemented in Java)

    Stefan Richthofer, one of the Jython core developers, will be there and will sprint on a Jython topic, e.g.

    Using Jython:
    - Jython basics
    - Python/Java integration
    - GUIs with JavaFX in Python

    Developing Jython:
    - Jython internals
    - Bug fixes in the Jython core - can we fix a few real bugs?

    Experimental (What is already implemented? Let's try it out!):
    - JyNI
    - Jython 3
Of course, every participant can suggest further topics, e.g.
  • Raspberry Pi robot (controlling a robot with a Raspberry Pi)
  • and more

Registration and further information

Everything else, including registration, can be found on the sprint page:

Participants should also sign up on the PyDDF mailing list, since that is where we coordinate:

About the Python Meeting Düsseldorf

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel gives a good overview of the talks; we publish videos of the talks there after the meetings.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf.

Have fun!

Marc-Andre Lemburg, eGenix.com

April 03, 2017

eGenix: Python Meeting Düsseldorf - 2017-04-05

2017-04-03T08:00:00Z

This post announces a regional user group meeting in Düsseldorf, Germany.

Announcement

The next Python Meeting Düsseldorf takes place on:

05.04.2017, 18:00
Room 1, 2nd floor, Bürgerhaus Stadtteilzentrum Bilk
Düsseldorfer Arcaden, Bachstr. 145, 40217 Düsseldorf


News

Talks already registered

Stefan Richthofer
        "pytypes"

André Aulich
        "Distributing Python web applications as native desktop apps"

Charlie Clark
        "Frankenstein — OO composition instead of inheritance"

Further talks are welcome and can still be registered. If you are interested, please contact info@pyddf.de.

Start time and location

We meet at 18:00 in the Bürgerhaus in the Düsseldorfer Arcaden.

The Bürgerhaus shares its entrance with the swimming pool and is located next to the entrance of the underground car park of the Düsseldorfer Arcaden.

A large "Schwimm' in Bilk" logo hangs above the entrance. Behind the door, turn directly left towards the two elevators and go up to the 2nd floor. The entrance to Room 1 is directly on the left as you step out of the elevator.

>>> Entrance in Google Street View

Introduction

The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel gives a good overview of the talks; we publish videos of the talks there after the meetings.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf:

Program

The Python Meeting Düsseldorf uses a mix of (lightning) talks and open discussion.

Talks can be registered in advance or brought in spontaneously during the meeting. A projector with XGA resolution is available.

To register a (lightning) talk, simply send an informal email to info@pyddf.de

Cost sharing

The Python Meeting Düsseldorf is organized by Python users for Python users.

Since the meeting room, projector, internet access and drinks incur costs, we ask participants for a contribution of EUR 10.00 incl. 19% VAT. Pupils and students pay EUR 5.00 incl. 19% VAT.

We ask all participants to bring the amount in cash.

Registration

Since we only have seats for about 20 people, we ask you to register by email. This does not create any obligation, but it makes planning easier for us.

To register for the meeting, simply send an informal email to info@pyddf.de

Further information

Further information is available on the meeting's website:

              http://pyddf.de/

Have fun!

Marc-Andre Lemburg, eGenix.com

September 14, 2014

Tim Knapp: Lightning Talks

2014-09-14T04:00:47Z

LCAuckland

LCA: 12-16 January 2015

Program online in next year or 2.

Singapore also put in proposal for future LCA.

600-700 attendees hoped for.

Heritage Preserve #WeWantJam

Grabbed 30TBs of .co.nz internet.

2 days of catering, hacking.

If you’d like to do it again tweet #WeWantJam

Less Boilerplate

import argparse

Command line

http://github.com/aliles/begins

Command line programs for busy developers.
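
As a minimal point of comparison (illustrative, not from the talk), this is the plain argparse boilerplate that begins aims to shrink down to a single decorated function:

    # Plain argparse version of a tiny command-line tool.
    import argparse

    def main():
        parser = argparse.ArgumentParser(description="Greet someone")
        parser.add_argument("name", help="who to greet")
        parser.add_argument("--shout", action="store_true",
                            help="print the greeting in uppercase")
        args = parser.parse_args()
        greeting = "Hello, %s!" % args.name
        print(greeting.upper() if args.shout else greeting)

    if __name__ == "__main__":
        main()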

Show us the world

rbenv - isolate different

pyenv shell 3.4.1

pyenv virtualenv name

pyenv local dirname

MySQL performance schema

Debugging schema for the database.

Can configure at runtime.

5.5 or 6 dbs.

Julia

Very pythonic. Stolen lots of good ideas from Python.

Compiled just in time. Timing a file that does nothing takes 2.25s.

The Python Promotion Pamphlet

International Python Promotion Pamphlet. Glossy & stylised.

Creating NZ-version. Will have high-quality printable version and web-version.

Make your company more visible to Python programmers rather than pay recruitment agents.

Blackbox

http://sheltered-forest-9460.herokuapp.com

http://github.com/tuxbert/blackbox

NZPUG

When you subscribe to the mailing list (607 members on mailing list) you aren’t a member of the incorporated society.

Jessica McKellar’s Kiwi PyCon keynote: 73,000 views. Far more than Guido van Rossum’s keynote at PyCon US.

Thomi is commandeering this lightning talk to announce that I'm leaving the committee and to ask if someone can join the NZPUG committee. See the committee members if you would like to join.

Docker for Python

Will do to IT industry what container box did to shipping industry.

LXC (Linux Containers) + UnionFS.

A Dockerfile is essentially like a bash script of commands.

You can inherit from other Dockerfiles.

Fig - Vagrant for Docker.

Ship deltas not images.

Shipyard - 3rd party ecosystem.

Gotchas

  • Use docker-osx, PyCharm

Object Factories

Sole purpose is to create objects for testing.

Don't care what the object is; just assign random attributes to objects.

Code Obfuscation

Weird bugs: object() > object()

If doing tests against time it will fail at midnight depending on your timezone.

False = True

Arduino + Thermal Printer + Python

Arduino sketch, Python script, Git

http://github.com/nick-NZ/Arduino-pull-request-printer

Dynamically creating Python tutorials and presentations

Massey Computer Science moved to Python in 2011.

Emacs: an operating system disguised as an editor.

http://www.orgmode.org

Org-babel - generate code.

Org-slidy

Bad-ass Postgres Tricks

http://bit.ly/pgtrix1

  1. Template Databases - clone dbs
  2. Estimated Counts
  3. Hstore - values have to be strings
  4. Smart Indexes
  5. Schema Change Locks
  6. PL/Python

“One button” test and deploy on AWS

Use Vagrant+Ansible

Scripted creation of AMI and deployment.

Python Antipatterns

Better: list comprehension

Better: raise AssertionError

Better: if pattern in input_str

Don’t pass an empty list as a default argument.

if 5 <= i < 10: print “something”

Better: return NotImplementedError
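
One of these spelled out (illustrative, not from the talk slides): a mutable default argument is created once and shared across calls, so default to None instead.

    # Antipattern: the default list is created once and reused on every call.
    def append_bad(item, items=[]):
        items.append(item)
        return items

    print(append_bad(1))  # [1]
    print(append_bad(2))  # [1, 2] -- the same list again

    # Better: default to None and create a fresh list inside the function.
    def append_good(item, items=None):
        if items is None:
            items = []
        items.append(item)
        return items

    print(append_good(1))  # [1]
    print(append_good(2))  # [2]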

Map reduction using Python scripts

Script to create ArcMap maps.

Making weird maps with Python

Pulled data from the census + meshblock boundaries (Koordinates.com) + Mapbox (TileMill) + GDAL.

Should have used Fiona + Shapely.

June 12, 2012

Jean-Michel Francois: How to patch ckeditor in Plone

by Jean-Michel FRANCOIS at 2012-06-12T09:29:35Z

Sometimes there are changes you can't upstream because they are project-specific and because they can't be overridden in a simple way. Let's take a current use case of mine: changing the default values of the table plugin in collective.ckeditor.

By default, the CKEditor "add table" form proposes a width of 500 pixels. A customer wants to change that default behavior for a good reason: he always uses 100% instead, and it is tedious to have bad default values.

So first, as a developer, you can check out the current collective.ckeditor:

git clone git://github.com/collective/collective.ckeditor.git

Try to find out which files you have to change. In our case:

  • collective/ckeditor/browser/ckeditor/_source/plugins/table/dialogs/table.js
  • collective/ckeditor/browser/ckeditor/plugins/table/dialogs/table.js

Make the modification and then run this command from the project folder:

git diff --no-prefix > table-default.diff

Do not forget the --no-prefix option or your patch will not be usable. Now you have your patch; you need to deploy it on your project. Because we use buildout to deploy our projects, here is a simple example ckeditor-patch.cfg:

[buildout]
extends=http://dist.plone.org/release/4.1-latest/versions.cfg
parts =
    instance
    patch-ckeditor
[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
eggs=
    Plone
    collective.ckeditor
zcml =
    collective.ckeditor
[patch-ckeditor]
recipe = collective.recipe.patch
egg = collective.ckeditor
patches = table-default.diff

This buildout will install Plone with collective.ckeditor and apply our patch to it. Here is the console output:

$ bin/buildout -c ckeditor-patch.cfg
Installing instance.
Getting distribution for 'collective.ckeditor'.
warning: no previously-included files matching '*pyc' found anywhere in distribution
Got collective.ckeditor 3.6.2.
Generated script '/Users/toutpt/myproject/bin/copy_ckeditor_code'.
Installing patch-ckeditor.
patch: reading patch /Users/toutpt/myproject/table-default.diff
patch: total files: 2  total hunks: 2
patch: in /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg...
patch: processing 1/2:      /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg/collective/ckeditor/browser/ckeditor/_source/plugins/table/dialogs/table.js
patch: successfully patched /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg/collective/ckeditor/browser/ckeditor/_source/plugins/table/dialogs/table.js
patch: processing 2/2:      /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg/collective/ckeditor/browser/ckeditor/plugins/table/dialogs/table.js
patch: successfully patched /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg/collective/ckeditor/browser/ckeditor/plugins/table/dialogs/table.js

And if you run the buildout again:

$ bin/buildout -c ckeditor-patch.cfg
Installing instance.
Installing patch-ckeditor.
patch: reading patch /Users/toutpt/myproject/table-default.diff
patch: total files: 2  total hunks: 2
patch: in /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg...
patch: processing 1/2:   /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg/collective/ckeditor/browser/ckeditor/_source/plugins/table/dialogs/table.js
patch: already patched   /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg/collective/ckeditor/browser/ckeditor/_source/plugins/table/dialogs/table.js
patch: processing 2/2:   /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg/collective/ckeditor/browser/ckeditor/plugins/table/dialogs/table.js
patch: already patched   /Users/toutpt/.buildout/installed_eggs/collective.ckeditor-3.6.2-py2.6.egg/collective/ckeditor/browser/ckeditor/plugins/table/dialogs/table.js

So now you have a working Zope instance with Plone and a patched CKEditor. Everyone should be happy!

Warning 1: As you can see, if you are using a shared eggs repository, your patch will be applied to all projects using this egg.

Warning 2: Plugins are fetched on demand (via Ajax) and there are 24 hours of browser cache. That means you have to empty your browser cache when you make changes to the JavaScript, or wait 24 hours. The headers of the table.js response show this browser cache:

Cache-Control:public,max-age=86400
Content-Length:8733
Content-Type:application/javascript
Date:Tue, 12 Jun 2012 09:07:18 GMT
Expires:Wed, 13 Jun 2012 09:07:18 GMT
Last-Modified:Tue, 12 Jun 2012 08:47:06 GMT

December 21, 2015

Reinout van Rees: "Complex" and "complicated"

by Reinout van Rees at 2015-12-21T10:17:00Z

I'm not a native English speaker, so sometimes I need to refresh my memory as to the exact meaning of a word. In this case: the exact difference between complex and complicated.

I found an english.stackexchange.com page with some nice answers. According to the first answer:

  • Complex is about the number of parts. The more (different) parts, the more complex.
  • Complicated is about how hard/difficult something is.

Another answer pointed at the relevant part of the Zen of Python:

Simple is better than complex.
Complex is better than complicated.

So when programming, you can divide up a task or problem into separate parts. The more parts, the more complex. Simple is better than complex, so don't add too many parts if you don't need them.

On the other hand, if you want to do everything in one part, it might end up being too difficult, too complicated. Making it more complex (more parts) is better in that case.

It is a trade-off. And it takes experience and a bit of Fingerspitzengefühl to make the right trade-off.

How to learn it? Talk with more experienced programmers is one way. Another way is to explicitly review your programs yourself with those two lines from the Zen of Python. I'll show them here again to drive them home:

Simple is better than complex.
Complex is better than complicated.

Ask yourself questions like this:

  • Do I still understand my own program? Did I make it too complicated?

  • Did I split it up enough? Or are there 400-line functions that I might better cut up? Into a more complex, but clearer, whole?

    Well-separated parts might make reasoning about your complicated problem easier.

  • Did I split everything up too much? Are there more parts than my brain can handle?

    Your brain's CPU has room for some 7 variables. Too many parts might make it impossible for you to work on your own code :-)

November 27, 2012

ZOPYX: Delivering Produce & Publish Authoring Environment to DEISA

2012-11-27T12:24:13Z

ZOPYX delivers a publishing environment for DEISA

September 18, 2012

RedTurtle: Plone Registry Strikes Back

by Luca Fabbri at 2012-09-18T08:15:00Z

My last post was about how to use the Plone registry in a clean way, even when storing complex data inside it.

Today we'll talk about the Plone registry again! I have some other tips to share!

Another good article

I'm not the only one to be inspired by the registry. After my first article about this subject, I came upon Complex Plone registry settings revisited. Give it a read; it's another interesting approach!

What's new?

I needed to extend the same product that inspired the last article, collective.analyticspanel, with some new features requested by a customer of ours.

Regardless of the specific features, what is important is that:

  • it somehow changes the type of one of our existing fields;
  • it adds a new field (that should be kept separate from the others).

Changing the field type

It can be a common problem: in the previous version (0.2) of collective.analyticspanel, users were provided with a checkbox (a boolean field) named "Apply to the whole subtree".

The customer asked to change the set of possible values from "True or False" to "True, False or False-with-exceptions".

The easiest way of doing it (the Dark Side) is to add a new boolean field (another checkbox). Considering that we were migrating from an old version, this suboptimal solution may indeed be used.

But we should ask ourselves: what kind of control would we like to see if we could develop the solution from scratch? For sure, a combobox (so: a Tuple of values).

I like the Light Side, so in version 0.3 I preferred to migrate the old Boolean values to the new Tuple.

The way to migrate is to write some Python code that runs during the product upgrade step. This is much simpler than you might expect because in fact we are not really changing the field type; rather, we are replacing the old field name, apply_to_subsection, with the more appropriate apply_to.

Thus, we are copying the old (boolean) value to a new (non-boolean) field.

        ...
        # During the upgrade step: copy the old boolean apply_to_subsection
        # value into the new apply_to field as a string, then drop the old one.
        for path_config in old_settings.path_specific_code:
            apply_to_subsection = path_config.apply_to_subsection
            del path_config.apply_to_subsection
            if apply_to_subsection:
                path_config.apply_to = u'subtree'
            else:
                path_config.apply_to = u'context'
        ...

Note that our fields are not basic registry elements, but sub-elements of path_config, one of the complex objects we played with in the previous article.

For a complete example see the whole migration code.

So we delete the old (Boolean) value and translate it into a simple string (using the items from the vocabulary of the new combobox).

I want a new fieldset

The second task also implied adding a new field.

Keeping an eye on the usability of the resulting interface, we saw that the form was becoming a little "chaotic" (I like simple forms; don't scare users if you can avoid it!). As the new field's default value is OK for 95% of product users, we knew that the best choice was to move it into a separate "Advanced" fieldset.

Now the question is: how to add a fieldset in Plone registry? The only documentation I found is an article from Malthe Borch (the man who saved us all by releasing z3c.jbot): "Getting registry settings in Plone to display in fieldsets".

By following that article, I reached my goal. How did it work for my needs?
Out of the box, all fields belong to a "Default" fieldset (not visible in Plone, inasmuch as it's the one and only fieldset).

To split fields into more than one fieldset, you need a combination of settings in the control panel view and separate interfaces.
The general idea is to keep a separate interface for every fieldset; the registry then needs to use a new interface that inherits from all of them.

This is what the interfaces look like:

    from zope.interface import Interface

    class IAnalyticsSettings(Interface):
        """Settings used in the control panel for analyticspanel: general panel
        """
        ... your default fields
    ...
    class IAnalyticsAdvancedSettings(Interface):
        """Settings used in the control panel for analyticspanel: advanced panel
        """
        ... your advanced field
    ...
    class IAnalyticsSettingsSchema(IAnalyticsSettings, IAnalyticsAdvancedSettings):
        """Settings used in the control panel for analyticspanel: unified panel
        """

And this is how it's handled by the control panel:

    from plone.app.registry.browser import controlpanel
    from z3c.form import field, group

    class FormAdvanced(group.Group):
        label = _(u"Advanced settings")  # _ is the package's i18n message factory
        fields = field.Fields(IAnalyticsAdvancedSettings)

    class AnalyticsSettingsEditForm(controlpanel.RegistryEditForm):
        """Media settings form.
        """
        schema = IAnalyticsSettingsSchema
        fields = field.Fields(IAnalyticsSettings)
        groups = (FormAdvanced,)
        ...

Another important change is in the Generic Setup registration: while the old version of the product registered the IAnalyticsSettings interface, the new one must register the composed one, IAnalyticsSettingsSchema (keep this information in mind for later).

<registry>
<records interface="collective.analyticspanel.interfaces.IAnalyticsSettingsSchema" />
</registry>

Done! The final result is quite agreeable!

Migration

Let's say you are a version 0.2 user, then you migrate to version 0.3, but finally you want to remove the product. In that scenario I found a little issue related to our change of interface.

The old (0.2) uninstall procedure was like this:

<registry>
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettings"
field="general_code"
delete="True"/>
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettings"
field="error_specific_code"
delete="True"/>
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettings"
field="path_specific_code"
delete="True"/>
</registry>

The new one (0.3) is this:

<registry>
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettingsSchema"
field="general_code"
delete="True"/>
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettingsSchema"
field="error_specific_code"
delete="True"/>
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettingsSchema"
field="path_specific_code"
delete="True"/>
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettingsSchema"
field="folderish_types"
delete="True"/>
</registry>

The weird effect of uninstalling version 0.3 of the product is that we still see, in the general Plone registry, the old values registered by version 0.2.

The solution is simple: during the product migration we also run a simple Generic Setup import step (we named it "clean_registry") that launches an uninstall procedure to clean the registry of the values used in the old version:

<registry>
<!-- those registry entries were used in version 0.2 and below -->
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettings"
field="general_code"
delete="True"/>
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettings"
field="error_specific_code"
delete="True"/>
<record interface="collective.analyticspanel.interfaces.IAnalyticsSettings"
field="path_specific_code"
delete="True"/>
</registry>

What about your beloved Plone 3 compatibility?

Once again, this product is still compatible with Plone 3, but with limitations:

  • There's a bug in the Plone 3 versions of plone.app.vocabularies, so we can't use a combobox with a vocabulary in the Plone registry. So: on Plone 3 we are not using a combobox, but a simple textarea (users can't select values, but need to type them manually).
  • Fieldsets don't work: luckily you'll get no errors; they are simply not displayed (and the final UI is really ugly!)

Conclusion

I hope that this article can help you to maintain order in the Plone registry!

Photos taken from emigepa

January 29, 2011

Hector Velarde: Implementing a photo gallery viewlet for CompositePack using jQuery

by hvelarde at 2011-01-29T00:19:03Z

CompositePack is a beautiful piece of code written by Godefroid Chapelle. We use CompositePack at La Jornada to create the frontpage for the breaking news edition.

Some time ago I was asked to create a viewlet for a photo gallery and we started testing some gallery products to accomplish this task. We tried FriendlyAlbum, Plone SmoothGallery, plonegalleryview and, the best and most promising by far, Slideshow Folder.

All of these products had limitations that kept me from using them as a base for my solution. I didn't want to create any new content types either, so the approach I followed was this:

  • use a folder as a container for the gallery; the name and description of the folder were going to be the title and introduction text for the gallery
  • use Image as the content type for the photos; the name and description of every image were going to be the alternative text (the alt attribute) and the caption of the photo

With this in mind, I started analyzing how to integrate one of the many JavaScript libraries available to create the gallery. I wanted to use KSS, as that framework is on its way to becoming the standard in the Plone world, but I soon abandoned the idea. KSS is still in development and most of the current work is being done on the Plone 3.0 branch. The only thing I found for Plone 2.5 was a product named PloneAzax, with a 1.0 release in alpha state. PloneAzax was more a demo of how AJAX would be used in Plone 3.0 than a usable product. Worse: it had some conflicts with CompositePack and I didn't want to mess with it.

After some research on the web I decided to use the jQuery JavaScript library. jQuery is fast and concise, and it lets you traverse HTML documents and handle events. From there all my work was pretty straightforward... well, I had to fight a little bit with jQuery and IE, but that's another story.

I created the viewlet the usual way and inserted the JavaScript code with the help of a Python script. To get the list of images in a given folder I used context.atctListAlbum(images=1). I took this idea from atct_album_view.pt, the page template used to view the images in a folder as small thumbnails. I registered the viewlet for the Folder content type using a modified version of compositepack-customisation-policy.py, and the JavaScript file using portal_javascripts, the JavaScript Registry.
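
As a rough sketch of what such a script boils down to (the real getSlideshowScript.py is linked below; the exact return structure of atctListAlbum is an assumption here), the idea is to collect the folder's images and hand their URL, alt text and caption to the jQuery code:

    # Hedged sketch, not the actual getSlideshowScript.py: gather the images
    # of the current folder, assuming atctListAlbum() returns catalog brains
    # grouped under an 'images' key (as the atct_album_view template uses them).
    def gallery_images(context):
        album = context.atctListAlbum(images=1)
        return [
            {
                'url': brain.getURL(),
                'alt': brain.Title,
                'caption': brain.Description,
            }
            for brain in album.get('images', [])
        ]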

The photo gallery is working fine on IE 6.0+, Firefox 1.5+, Safari 2.0+ and Opera 9.0+. Please note that only one gallery is allowed per page. It would be nice to add some effects, but that will have to wait.

The photo gallery viewlet in CompositePack's design view

You can see here the final versions of
image_gallery_viewlet.pt and getSlideshowScript.py.

CompositePack rocks! I hope you enjoy it as much as I do.

July 28, 2011

Connexions Blog: Connexions for Android Updated

by Ed Woodward at 2011-07-28T13:42:04Z

Connexions for Android was updated today to correct a bug reported by users. The update is available in the Android Market or on our website. If you have used our Android app, we would love to get some feedback on your experience. Please email us at techsupport at cnx.org.

February 24, 2011

Jon Stahl: Migrating WordPress from Apache to Cherokee

by Jon Stahl at 2011-02-24T04:17:19Z

[Sorry, this is gonna be a geeky one.  You've been warned.]

For the past couple of years, I’ve had this blog on a 256MB VPS slice at Rackspace Cloud, which, overall, has been a very nice ~$12/month experience.  But I’ve chafed a bit at having to restart my Apache web server instance every so often, because it has run out of RAM.  I’m just running a personal blog, goshdarnit.  How can that burn up 256MB of RAM!?!? Turns out the culprit is Apache.  Or, more accurately, the combination of Apache + PHP.  PHP prevents Apache from running in its more memory-efficient “worker” configuration, and the result is that even a site with almost no traffic can easily run out of memory.  Worse, tuning Apache to keep memory usage low means that it starts to perform really poorly.  Bottom line: it’s actually really hard to run a simple WordPress site (or any other PHP app) behind Apache on a virtual private server with limited memory.

I’d been casting about for a solution, and had almost settled on using Nginx, the increasingly popular open-source web server that lots of my friends in the Plone community really love.  But, truth be told, I was a bit intimidated by configuring it to work with PHP, even though it’s far simpler than Apache (which I also barely understand).

Then, David Bain turned me on to Cherokee.  Cherokee is a new-ish open-source web server that is designed to combine the speed of Nginx with insane ease of configuration + deployment via a user-friendly point-and-click web administration panel.  And, sure enough, it didn’t disappoint.  In about an hour, I was able to install Cherokee, configure PHP, point Cherokee at my WordPress instance, and migrate a few rewrite rules to handle my WordPress shortlinks.

My site is now cranking out over 100 pages/second while RAM usage is well under the maximum and we’re never swapping into virtual memory. Suddenly, the WordPress editing interface feels reasonably responsive.  Vroom!

Looking into the future, I’m pretty excited about the potential to easily deploy Plone behind Cherokee.  Cherokee has built-in, easy-to-configure uWSGI support, which means that we’ll be able to start messing with Plone 4.1 + Cherokee.  (Plone 4.1 will ship with Zope 2.13, which is the first Zope 2 release to ship with unofficial WSGI support.)  This will remain “experimental” for a release or two, until the community’s had time to explore & document best practices.

March 06, 2015

Connexions Developers Blog: Rewrite Technologies

by Ed Woodward at 2015-03-06T14:32:01Z

Our team has been hard at work rewriting the OpenStax CNX site.  A frequently asked question is "What tech are you using?". This post covers a high level overview of the new site.

Architecture


Rewrite is a Single-Page App, which means most of the logic lives in a Javascript client. The Client accesses data via REST APIs that are written in Python.  All of our data is stored in a PostgreSQL database. The details of the separate components follow.

Webview

Webview is the Javascript Client.  The basis of the Client is Backbone.js and Bootstrap.  We use several other Javascript packages as well.
Most of the Javascript is written in Coffeescript and compiled to Javascript.  CSS is developed using Less.

Webview requests json from the Archive via REST APIs.  The json contains the HTML for the content and any metadata. The Client parses the json and displays it.
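
As a toy illustration of that flow (shown in Python for brevity; the endpoint URL and field names are assumptions, not the documented API), fetching a page's JSON looks roughly like this:

    # Fetch a page's JSON from a hypothetical archive endpoint and pull out
    # the pieces the client would render.
    import requests

    response = requests.get("https://archive.example.org/contents/<page-id>.json")
    response.raise_for_status()
    page = response.json()
    title = page.get("title")        # metadata
    body_html = page.get("content")  # the HTML for the content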

Editing is a separate view in Webview.  When the user selects to edit, the views are swapped out. The new editor is based on the open-source HTML5 editor Aloha.  We have added several plugins to Aloha that are textbook editing specific.  The development on the editor was done by our team and the OERPub team with OERPub doing the bulk of the work.

We are using Nginx as our web server.

Archive

Archive stores published content and handles search. Content retrieval and search are handled via APIs written in Python. When content is requested, the json is built by stored procedures in Postgres, using the json functions in Postgres.

Search is done with optimized queries.  We are caching subject and one word searches long term and all searches short term to improve performance.  When a user pages through search results, the cached result is used to populate the next page.

Archive will run on any WSGI-compatible server.  We are currently using Waitress as our server.

Publishing

The publishing application integrates with the Archive database. It allows users and third-party applications to publish content to the Archive, where it can be read and distributed to the public. Publishing is built similar to Archive, but differs in many ways. Archive is a read-only content API. Publishing provides an additional set of APIs that handle the publishing workflow, which includes user interactions like license and role (e.g. author or translator) acceptance, as well as the triggering of export files. Users of the OpenStax CNX system will typically never directly interact with this application. Almost all of the backend business logic is handled within the publishing application.

Authoring

Unpublished content is stored in Authoring in Postgres.  Authoring also has APIs used by the Editor and the Workspace. When content is published, an EPUB is generated and passed from Authoring to Publishing.  The EPUB format was selected to pass information between components because it encapsulates all of the info needed.

The Workspace is a listing of all content a user has access to edit. Books and Pages can be created in the Workspace and they can be deleted as well.

OpenStax Accounts

Users are now stored in a shared accounts component.  This was developed so users could have the same account on all OpenStax sites. All CNX users have been migrated to the new Accounts. Accounts uses OAuth so users will also be able to log in using Google, Twitter and Facebook. CNX will no longer create CNX user accounts.  New users will need to use one of their existing OAuth accounts to log in.

Logging

We are currently logging information, errors and user interactions to Syslog.  Our goal is to load the logs into Graphite so we can visually see how our site is being used.

Transformation Services

Transformation Services generates export files (PDF, EPUB, Zip, etc.) and imports content into the editor. The initial design uses the same import and export code from Legacy CNX inside of a messaging system wrapper.

The messaging uses RabbitMQ and several message queues. The queues will give us persistence of the file generation requests.  The requests will be sent by Publishing after content has been added to Archive. Publishing will pass an EPUB to Transformation Services that contains all of the data needed to generate the files.

OpenStax CNX is a deceptively complex site that has many moving parts.  Our goal with this architecture was to design a component based system that can be easily updated and tested without impacting all of the site. All of our code is on Github.

Many thanks to CNX team members Michael Mulich and Derek Kent for reviewing and contributing to this post.

February 05, 2016

Gil Forcada: Make code analysis cleanups brainless

by gforcada at 2016-02-05T00:56:34Z

I wrote a small guide with a step by step instructions on how to cleanup a package code so that it follows our Plone style guide.

To add sugar on top, the packages that are already monitored for it, point you to that same guide, see it in action.

Enjoy!

November 17, 2010

Netsight Developers: Zope Page Templates in Google App Engine

2010-11-17T19:59:01Z

At EuroPython last year, Matt Hamilton gave a talk on using Zope Page Templates (ZPT) outside of Zope. ZPT uses the Template Attribute Language (TAL) to create dynamic templates that you can use in your own web applications, reporting frameworks, documentation systems or any other project.

Why TAL?

The point of this post isn't to go into detail about why you should use ZPT/TAL. Suffice it to say that TAL:

  • Makes well-formed XHTML easy
  • Ensures that you close all elements and quote attributes
  • Escapes all ampersands by default (& -> &amp;)

The Django Templating Language:

<ul>
  {% for name in row %}
    <li>{{name}}</li>
  {% endfor %}
</ul>

TAL:

<ul>
  <li tal:repeat="name row"
      tal:content="name">
    Dummy data
  </li>
</ul>

Using ZPT in your own project

There are three steps to using ZPT in your own project.

  • Install ZPT (via the zope.pagetemplate package and its dependencies)
  • Create a template file
  • Render the template file using the data from your application

Install zope.pagetemplate

I recommend using virtualenv for each new application.

# virtualenv zptdemo
# cd zptdemo
# bin/easy_install zope.pagetemplate

Create a template

mytemplate.pt

<html>
  <body>
    <h1>Hello World</h1>
    <div tal:condition="python:foo == 'bar'">
      <ul>
        <li tal:repeat="item rows" tal:content="item" />
      </ul>
    </div>
  </body>
</html>

Render the template

mycode.py

from zope.pagetemplate.pagetemplatefile \
    import PageTemplateFile
my_pt = PageTemplateFile('mytemplate.pt')
context = {'rows': ['apple', 'banana', 'carrot'],
           'foo':'bar'}
print my_pt.pt_render(namespace=context)

And that's it. This will generate the following:

Hello World

  • apple
  • banana
  • carrot

Google App Engine

This is all very well if you're able to install your own packages, but Google App Engine (GAE) doesn't allow you to do this. You can, however, include packages with your application, which is what we'll do here. When installing zope.pagetemplate into your virtual environment, you may have noticed that the following packages were installed:

  • zope.pagetemplate
  • zope.i18nmessageid
  • zope.interface
  • zope.tal
  • zope.tales

We can extract all these files and put them into our GAE application.

I've already done this for you, and you can grab a copy of the 'zope' folder on GitHub here. Put this folder into the top level of your application.

Create a template as described above, and then place the Python code that renders the template into your application's RequestHandler. It should end up looking something like this:

from google.appengine.ext import webapp
from zope.pagetemplate.pagetemplatefile import PageTemplateFile

class MainHandler(webapp.RequestHandler):
  def get(self):
    my_pt = PageTemplateFile('main.pt')
    context = {'rows': ['apple', 'banana', 'carrot'],
               'foo': 'bar'}
    self.response.out.write(my_pt.pt_render(namespace=context))

At this point you have ZPT working in GAE and you can take advantage of all the features that ZPT and TAL provide.

The application on GitHub can be downloaded, tested and even deployed straight to Google App Engine without modification. You can see an example of it running here.

Update: it's also possible to use Chameleon on Google App Engine which, as Martin says, may be easier and faster.