
Use PHPSpec NOW

Symfony 2 based applications should usually be built test-driven. But given the nature of such applications, I found myself wondering whether unit testing with PHPUnit really is the right way to go. Why? Because a Symfony application usually has to implement a specified set of features, and we should not be looking at our application merely from a functional level.

Thus, after browsing through Building Quality into a Symfony app, I decided to take the leap and specify my implementation rather than just cover functionality. This is done easily with PHPSpec.

The switch

Now let us switch the default Symfony project to PHPSpec. First, I recommend installing Composer.

Using Composer we create a project template:

$ composer create-project symfony/framework-standard-edition path/ "2.3.*"

Next we will add PHPSpec and friends:

$ composer require --dev "phpspec/phpspec:2.0.*"
$ composer require --dev "phpspec/prophecy:1.2.*"
$ composer require --dev "henrikbjorn/phpspec-code-coverage:1.0.*@dev"

This will add PHPSpec along with the mocking framework Prophecy, and the extension for generating coverage reports -- which you may want to see, right?

Final call

You can -- and should -- add a custom configuration for PHPSpec to your project, since it allows you to tune output, enable extensions, and run spec suites for specific bundles.

The following phpspec.yml configuration file is built for the default AcmeDemo bundle, and enables the code coverage extension with a default configuration.

extensions:
  - PhpSpec\Extension\CodeCoverageExtension

code_coverage:
  format: html
  output: .qa/coverage

formatter.name: pretty

suites:
  AcmeDemoBundle:
    namespace: Acme\DemoBundle
    spec_path: src/Acme/DemoBundle

Now you can start creating specs in src/Acme/DemoBundle/spec/ for each part of the bundle, and once done, you can run the test suite:

$ php bin/phpspec run
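
To give you an idea of what a spec looks like, here is a minimal sketch for a hypothetical Acme\DemoBundle\Calculator class. The class name and its add() method are made up for illustration, and the exact file location depends on your suite configuration:

<?php

// Hypothetical example, e.g. src/Acme/DemoBundle/spec/CalculatorSpec.php
namespace spec\Acme\DemoBundle;

use PhpSpec\ObjectBehavior;

class CalculatorSpec extends ObjectBehavior
{
    function it_is_initializable()
    {
        $this->shouldHaveType('Acme\DemoBundle\Calculator');
    }

    function it_adds_two_numbers()
    {
        // PHPSpec proxies method calls to the class under specification,
        // so this describes the expected return value of add().
        $this->add(2, 3)->shouldReturn(5);
    }
}

If the class or a method does not exist yet, PHPSpec will offer to generate a skeleton for you when you run the suite.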

Check out the PHPSpec manual, and make friends with it. It might take a while, but you will sooner or later notice that you switch from mass producing code to only producing what is really needed.

Expect to see more about PHPSpec soon. One hint ahead of time: do not try to spec abstract classes, spec the implementations only!

Wine on Ubuntu

Ever since I left the Windows world completely two years ago, there have been occasions where I had to test or temporarily use a Windows-only application.

Wine is quite the awesome piece of software. It implements both the 32-bit and 64-bit Windows APIs, and lets you run your Windows-only applications under Linux.

Installing Wine

By default, Ubuntu includes a Wine version from the stable branch. Given the steady improvements in Wine that allow you to run current Windows applications, it is nevertheless desirable to have a recent development version.

There is an Ubuntu PPA available which provides up-to-date versions of Wine. You can add it by executing this command:

sudo add-apt-repository ppa:ubuntu-wine/ppa && sudo apt-get update

This will add the Wine PPA to your system, and ask you to accept the PPA's signing key. Once done, you may install Wine using:

sudo apt-get install wine winetricks

This will install Wine and Winetricks. Winetricks is a helper which lets you install common Windows libraries and applications to improve your experience when running Windows applications.

This includes libraries such as DirectX or .NET, but also applications like Internet Explorer. Executing Winetricks without arguments will open an interactive menu of everything it can install.

For example, to list all options for installing original Microsoft libraries instead of the Wine replacements, you can run

winetricks dlls list

You will recognize a few suspects there, such as codecs, fonts, or even the Windows Script Host.

How to use Wine

One of the amazing things about Wine is the ability to create a sandboxed environment for your Windows applications.

Wine does so by supporting an environment variable named WINEPREFIX. By specifying a different prefix for each application or use case, you can separate applications from each other, and fine-tune every prefix to the application's needs.

Here is my default starting point for creating a Wine environment.

WINEARCH=win32 WINEPREFIX=$HOME/.wine winecfg
WINEPREFIX=$HOME/.wine winetricks ddr=opengl fontsmooth=rgb sound=alsa hosts
WINEPREFIX=$HOME/.wine winetricks corefonts mfc42
WINEPREFIX=$HOME/.wine winetricks msxml3 msxml6
WINEPREFIX=$HOME/.wine winetricks riched20 riched30
WINEPREFIX=$HOME/.wine winetricks vcrun2005 vcrun2008 vcrun2010

The first line will create a data directory for Wine, and open the Wine configuration utility where I usually check the Desktop Integration tab to correct the Wine mapping for the My Documents folder. It seems like Wine always sets this to your HOME directory.

The second line will set the DirectDraw renderer to OpenGL, which does help with performance. I also select font smoothing for RGB LCDs, and select the ALSA sound driver. Also, I prefer to have an empty hosts file in my Wine sandboxes since some applications check for its existence.

Finally, the remaining lines will install the original Microsoft Windows fonts, the MFC 4.2 runtime, MSXML, two versions of the Microsoft RichEdit control, and three common Visual C++ runtime libraries.

With that you are set for most applications.
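
To actually use such a sandbox, prefix your Wine calls with the same WINEPREFIX. The paths below are placeholders for whatever installer or application you want to run:

WINEPREFIX=$HOME/.wine wine /path/to/setup.exe
WINEPREFIX=$HOME/.wine wine "C:\Program Files\MyApp\myapp.exe"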

What else?

For newer applications that use HTML views, you may have to install Internet Explorer 8 using winetricks.

WINEPREFIX=$HOME/.wine winetricks ie8

Further hints may be added later.

DevOps with KVM and Puppet (1)

Building your network has never been easier. These days DevOps is everywhere, and with tools like KVM and Puppet freely available, you can build your local network by focusing only on what each system should do for you.

Introduction

Let us wind back time a bit, and pretend it is the year 2005. Back then, when you wanted to set up a network for your office, you would face a truly epic task: that of manually configuring servers. If you were lucky, only a few; on a bad day it might have been dozens.

Back then this meant hideous amounts of planning, documentation, and preparation of configuration files, and of course system preparation: installation and basic configuration just to get started.

This meant tons of identical tasks to fulfill, and every single task had to be done by hand. I've been there, and you probably have faint memories of these days, too.

Luckily for us, there is an application for that: Puppet. Puppet is IT automation software that helps system administrators manage infrastructure throughout its life-cycle, from provisioning and configuration to patch management and compliance.

Building

Based on Ubuntu 12.04 Server, we will create a simple server host with KVM enabled.

... a KVM server

sudo apt-get install qemu-kvm libvirt-bin bridge-utils python-vm-builder
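
Before going further, it is worth checking that your host actually supports hardware virtualization. On Ubuntu, the kvm-ok helper from the cpu-checker package reports this (package and command as available on Ubuntu 12.04):

sudo apt-get install cpu-checker
kvm-ok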

We are going to build our guests by bootstrapping Ubuntu from the original packages. While bandwidth may be cheap, we can spare ourselves the time of downloading packages twice by using an apt package caching proxy.

Install apt-cacher-ng by issuing this command:

sudo apt-get install apt-cacher-ng

Once installed, edit /etc/apt-cacher-ng/acng.conf and replace the line containing Port:3142 with Port:9999, then fire up our local repository cache by executing

sudo service apt-cacher-ng restart
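
To verify the cache is answering on the new port, you can request its built-in usage report page, which apt-cacher-ng serves at /acng-report.html:

curl http://localhost:9999/acng-report.html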

... a Puppet master

Now, we generate a MAC address by executing

MACADDR="52:54:00:$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\).*$/\1:\2:\3/')"; echo $MACADDR

Next we create an image for the Puppet master, using our newly generated MAC address, by issuing the following command (substitute the value of $MACADDR for the --mac parameter):

sudo vmbuilder kvm ubuntu -o --libvirt qemu:///system \
  --suite precise --flavour server --arch amd64 -m 512 --cpus=1 \
  --mac=52:54:00:ba:a9:17 --ip=192.168.100.10 --dns=192.168.100.1 \
  --gw=192.168.100.1 \
  --hostname master --domain kogitoapp.rocks \
  --user kogitoapp --pass rocks \
  --addpkg unattended-upgrades --addpkg acpid --addpkg facter \
  --addpkg puppet --addpkg puppetmaster \
  --mirror http://localhost:9999/ubuntu

... a KVM / Puppet client

The default host will contain only the Puppet agent. Again, a new MAC address is created using the command from earlier. Thus we will modify the command as follows:

sudo vmbuilder kvm ubuntu -o --libvirt qemu:///system \
  --suite precise --flavour server --arch amd64 -m 512 --cpus=1 \
  --mac=52:54:00:ca:12:e3 --ip=192.168.100.11 --dns=192.168.100.1 \
  --gw=192.168.100.1 \
  --hostname code --domain kogitoapp.rocks \
  --user kogitoapp --pass rocks \
  --addpkg unattended-upgrades --addpkg acpid --addpkg facter --addpkg puppet \
  --mirror http://localhost:9999/ubuntu
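
Since we passed --libvirt qemu:///system, vmbuilder registers the new guests as libvirt domains, so they can be started with virsh. A minimal sketch -- check the list output first, as the exact domain names depend on how vmbuilder registered them:

virsh -c qemu:///system list --all
virsh -c qemu:///system start master
virsh -c qemu:///system start code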

Up next!

Since we now have a Puppet master and a first client running, we will learn how easily we can schedule and deploy system changes to both our client and our server. The good news is: in Puppet, the master server can also be a client. The following list is what came to my mind while writing this, so it surely is not complete.

  • tracking Puppet configuration changes,
  • creating clients with Puppet pre-configured,
  • administering Puppet with Puppet Dashboard or The Foreman.

There are quite a few interesting things which we can achieve with Puppet, and I'll be covering these step by step.

Using Propel as ORM in Symfony2 applications

Symfony 2 is quite awesome, and what I really like is the ability to swap out any component for another. Ever since I started building applications with Symfony, one thing has really bothered me, and that was Doctrine. It just did not feel natural to use.

The switch

Lucky me, there is an alternative: Propel ORM. Have a look for yourself. You can easily migrate existing projects, it has the features needed to build something, and it has its very own way of forcing you into database-independent development. On top of that, I like the schema definitions and fixtures a lot.

Now let us switch the default Symfony project to Propel. First, I recommend installing Composer.

Using Composer we create a project template:

$ php composer.phar create-project symfony/framework-standard-edition my-project/ 2.1.6

Next we will remove Doctrine and add Propel to the Composer file. Edit composer.json

-        "doctrine/orm": ">=2.2.3,<2.4-dev",
-        "doctrine/doctrine-bundle": "1.0.*",
+        "propel/propel-bundle": "1.1.*",

With this we have removed the Doctrine ORM and the Symfony bundle. Now we need to edit app/AppKernel.php

-            new Doctrine\Bundle\DoctrineBundle\DoctrineBundle(),
+            new Propel\PropelBundle\PropelBundle(),

The Symfony configuration in app/config/config.yml also needs a bit of love

-# Doctrine Configuration
-doctrine:
-    dbal:
-        driver:   %database_driver%
-        host:     %database_host%
-        port:     %database_port%
-        dbname:   %database_name%
-        user:     %database_user%
-        password: %database_password%
-        charset:  UTF8
-
-    orm:
-        auto_generate_proxy_classes: %kernel.debug%
-        auto_mapping: true
+# Propel Configuration
+propel:
+    dbal:
+        driver:     "%database_driver%"
+        user:       "%database_user%"
+        password:   "%database_password%"
+        dsn:        "%database_driver%:host=%database_host%;dbname=%database_name%;charset=%database_charset%"

Last -- but not least -- we need to edit app/config/parameters.yml:

parameters:
     database_driver:   pdo_mysql
     database_host:     localhost
-    database_port:     ~
     database_name:     symfony
     database_user:     root
     database_password: ~
+    database_charset:  UTF8

Final call

To finalize the switch to Propel, we run Composer once

$ composer update

After a few moments, we are done. You now have a Propel-enabled Symfony2 template. The application console now sports a lot of Propel commands.
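
As a quick sanity check, you can list the new commands and, once a schema is defined, build your model classes from the console. The following is a sketch based on the 1.1 series of the PropelBundle; consult its documentation for the exact command names in your version:

$ php app/console list propel
$ php app/console propel:build
$ php app/console propel:sql:insert --force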

Try the Propel documentation for an overview of the nifty things Propel can do for you.

Free Software and the real Freedom of Choice

I am a friend of Free Software, and in my daily life I heavily depend on it. As such, there are times when I am irritated by the attitude of people who publicly represent the Free Software movement.

It has always been my understanding that Free Software itself was created to allow people the freedom of choice between proprietary, closed software and free, open-source software.

Freedom of Choice

People running Free Software, and especially a Linux distribution of their choice, can be considered able to make a choice themselves. Obviously, by running a non-Windows, non-Mac system, they already have some experience which led to installing one of the many distributions.

Personally I have been using Windows and Mac OS for a few years, and for various shortcomings within these, I chose to use Arch Linux, Fedora, openSUSE, and also Ubuntu -- both server and desktop releases.

It is moments such as reading posts like the Free Software Foundation's post on Ubuntu and Spyware that clearly give me the creeps.

Ubuntu sends that string to one of Canonical's servers. (Canonical is the company that develops Ubuntu.)

I wonder what the big surprise is here. Canonical is a company which does invest in the development of Ubuntu, and as such, it seems kind of obvious that they would want some kind of digital feedback on their product.

After all, Ubuntu is a product, and not just a distribution. Thus as a user I kind of expect to see some kind of connection between the freely available Ubuntu releases, and a commercial interest.

The same applies to any other distribution, much like Fedora or openSUSE, just to name a few.

This is just like the first surveillance practice I learned about in Windows. My late friend Fravia told me that when he searched for a string in the files of his Windows system, it sent a packet to some server, which was detected by his firewall.

Now here I really start to wonder whether people actually read what their computer displays to them. Yes, Windows does send back data to Microsoft, and it is to be expected. It is a commercial product, and a sensible approach to verifying that your product works, and which parts of your software actually are used and how, is to simply report back to the vendor.

When comparing Ubuntu and Windows in this regard, I can only see one difference that the article provided by the Free Software Foundation clearly misses: Windows actually tells you that it would like to report usage information to Microsoft, whereas Ubuntu does not.

The real difference here is asking permission from the user. Still, Ubuntu does at least allow every user to disable sending information to Canonical, which, from where I stand, works for me.

The point is

What really pisses me off here is the reaction posted by the Free Software Foundation, because it is far from an honest reaction.

Free Software, and the movement behind it, is driven by business interests. Say what you want, but you will have a hard time proving it is not.

People, and especially engineers, are paid to work on Free Software because these days providing a free, open-source product is a valid approach to showcasing your company's abilities. And as a result of this presentation and availability, you actually make money from services and from extending your offering.

Why is it so hard to just acknowledge that free software and closed software exist on a par? Each provides value to people, and I find it dishonorable to simply get angry about the simple facts of life.

Free Software does exist because there is someone paying for it. Free software does exist because there is a business value behind it, and there are people and companies making a living out of it.

Bashing that is worse to me than connecting free and closed software, because in the end, in real life, you choose what works for you. You do not use software because it is free or because it is commercial.

You use software because it solves an issue for you.

Dear Free Software Foundation,

I want my freedom of choice, and I want to choose an option that simply works for me based upon its inherent quality.

Being forced to choose based only on whether something is free software or closed software is as bad as only being able to use closed-source software after all.

With kind regards, Daniel S. Reichenbach