Loading Fixtures Inside Functional Tests

Since I last wrote about Winning Side Ministries’ website (the one I was writing a custom CMS for) I’ve changed my mind yet again. Yes, I tend to do that a lot. I’m currently rebuilding the website on the Symfony2 PHP web framework. It’s based on MVC, presents a nice clean interface against which one can build pretty much anything web-based, and has lots of great built-in tools for getting the job done.

Annoyed by past experiences with the code-first-test-last development flow, I’ve decided to try Test Driven Development for this particular project, and I’m finding that I really like it. I will admit that there have been times when I’ve had to sit and ponder how to bring about certain changes by first writing tests, but they’ve all been situations I’ll run into time and time again, so it hasn’t been time lost. One of those was the first time I needed to write the code to drive a view (that is, the page that will be rendered in the browser). Since this involves combining multiple components, the tests preceding development must by definition be functional tests; that in turn requires writing some data fixtures so we’ll have something to run the tests against.

The Symfony framework uses the Doctrine ORM by default, and happily enough there is a bundle (a Symfony term that more or less equates to a module) called DoctrineFixturesBundle that handles most of the grunt work of maintaining fixtures for you.  The documentation page for the bundle outlines the steps required for writing fixtures and loading them into the database, though it only shows how to load them manually from the command line.  This puzzled me, since one would think it wiser to reload the fixtures each time the functional tests are run, so that the tests can do all the CRUD on the database they want without spoiling things for the next run.  That’s when I set off to find out how to clear the database and load fresh fixtures at the beginning of each test run.

To start off, here is the class I came up with:
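
(What follows is a sketch rather than a verbatim drop-in: the Acme\TestBundle namespace is a placeholder for wherever you keep your testing classes, and accessors like getEntityManager() reflect the Symfony 2.0 era.)

<?php

namespace Acme\TestBundle\Test;

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;
use Symfony\Bundle\FrameworkBundle\Console\Application;
use Symfony\Component\Console\Input\StringInput;
use Symfony\Component\Console\Output\StreamOutput;
use Doctrine\ORM\Tools\SchemaTool;

class FreshFixtureWebTestCase extends WebTestCase
{
    protected static $container;

    public static function setUpBeforeClass()
    {
        // boot a kernel and keep a reference to the service container
        $kernel = static::createKernel();
        $kernel->boot();
        static::$container = $kernel->getContainer();

        $em = static::$container->get('doctrine')->getEntityManager();

        // rebuild the schema directly from the Entity meta-data
        $metadata = $em->getMetadataFactory()->getAllMetadata();
        $schemaTool = new SchemaTool($em);
        $schemaTool->dropDatabase();
        $schemaTool->createSchema($metadata);

        // run doctrine:fixtures:load in-process; --append skips the
        // interactive purge prompt (the schema is already empty anyway)
        $application = new Application($kernel);
        $application->setAutoExit(false);

        $file = tmpfile(); // deleted automatically when the script exits
        $application->run(
            new StringInput('doctrine:fixtures:load --append'),
            new StreamOutput($file)
        );

        // read the command output back and make sure nothing blew up
        rewind($file);
        $result = stream_get_contents($file);
        self::assertNotContains('Exception', $result);
    }

    public static function tearDownAfterClass()
    {
        // drop everything so stale data can't leak into other test classes
        $em = static::$container->get('doctrine')->getEntityManager();
        $schemaTool = new SchemaTool($em);
        $schemaTool->dropDatabase();
    }
}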

Let’s break it down a bit since a few of the constituent pieces of code are significant in their own right.

Base Class

We will be extending the class Symfony provides to facilitate functional tests: WebTestCase. It’s just a subclass of PHPUnit_Framework_TestCase with a built-in web crawler that lets us make requests and check the responses. The namespace defined there is where I keep my custom testing classes.
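
In skeleton form it’s just this (again, the namespace is only a placeholder):

namespace Acme\TestBundle\Test;

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class FreshFixtureWebTestCase extends WebTestCase
{
    // setUpBeforeClass() and tearDownAfterClass() go here
}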

In PHPUnit 3.6.x, if we want to run code just once before all the tests in a *_TestCase subclass, we put that code in a public static method called setUpBeforeClass() (as opposed to setUp(), which is called before every individual test). Its destructive counterpart is tearDownAfterClass(), just as you might expect. These methods will contain our code for loading the fixtures and then wiping them out when we’re done. If we need to use setUpBeforeClass() or tearDownAfterClass() in classes that extend FreshFixtureWebTestCase we have to be sure to call the parent:: version as well or our code here won’t be executed.

Accessing the Entity Manager

To get started, we need to grab a reference to the EntityManager. This is pretty common code inside a WebTestCase, so I’ll not go into the explanation, but here it is. Notice that I’ve assigned our reference to the Container to a static class variable…this is just a convenience for our subclasses and is not required.
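
Here’s the relevant piece of the sketch above:

$kernel = static::createKernel();
$kernel->boot();

// stashing the container in a static is the convenience mentioned above
static::$container = $kernel->getContainer();

// 'doctrine' is the stock registry service; getEntityManager() is the
// Symfony 2.0-era accessor
$em = static::$container->get('doctrine')->getEntityManager();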

Retrieving Entity class meta-data

Now comes the fun stuff. We don’t want to have to worry about the state of the database schema or migrations in the tests, so I like to dump the database and totally reload the schema directly from the Entities before the tests run. To do that we get a reference to the MetadataFactory from our EntityManager and call its getAllMetadata() method. This returns an array of ClassMetadata objects, one for each of our Entity classes.
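
That piece is a one-liner:

// an array of ClassMetadata objects, one per mapped Entity
$metadata = $em->getMetadataFactory()->getAllMetadata();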

Using SchemaTool to modify the database

The question is what do we do with the meta-data once we have it? Doctrine comes with a nice helper class called the SchemaTool. There are several different things we can do with this class, including dropping our current database and creating a schema from meta-data.
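
So we hand the meta-data straight to a SchemaTool built on our EntityManager:

use Doctrine\ORM\Tools\SchemaTool;

$schemaTool = new SchemaTool($em);
$schemaTool->dropDatabase();          // wipe out whatever is there
$schemaTool->createSchema($metadata); // rebuild straight from the Entities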

Creating a CLI application object

The doctrine-fixtures bundle only gives us the ability to load fixtures from the command line. In order to run Symfony CLI commands from PHP code we need to create an Application object. We have to make sure we set autoExit to false or else our entire PHP script will exit as soon as the command has run—this would be rather counterproductive when we’re trying to run tests!
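
The setup is only a couple of lines:

use Symfony\Bundle\FrameworkBundle\Console\Application;

$application = new Application($kernel);
$application->setAutoExit(false); // don't kill PHP when the command finishes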

Running the doctrine:fixtures:load command

To actually run our command we create input and output objects and feed them to the Application object. The command to run goes into the input, and the result (which would normally be printed to the terminal) will be read from the output. Our command will be in the form of a string, so we’ll use StringInput. We want the output of the command in a string as well, but unfortunately there is no StringOutput class; there is, however, a StreamOutput, which we can point at a temporary file. Once the command is done executing we can read the contents of the file back. Note that PHP automatically deletes files created with tmpfile() once the script exits, so we don’t have to bother with cleanup.
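
Putting that together (the --append flag is my own choice here: the schema was just rebuilt empty, so there is nothing to purge and no confirmation prompt to answer):

$file = tmpfile(); // PHP removes this automatically at script exit

$application->run(
    new StringInput('doctrine:fixtures:load --append'),
    new StreamOutput($file)
);

rewind($file);
$result = stream_get_contents($file);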

Checking for errors

If there were any errors, the output of the command should contain the word ‘Exception’. I just run an assertion to make sure that word doesn’t appear in the output, though you may have to use a different check if you have a table or Entity with ‘Exception’ in the name.
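
The check itself is a single assertion:

// a failed fixture load prints a stack trace containing 'Exception'
self::assertNotContains('Exception', $result);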

Cleaning up

Once all our tests are done we want to clear the database again. This way if we accidentally forget to load something in another test class we’ll know immediately because all queries will fail.
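
So the teardown just drops everything again:

public static function tearDownAfterClass()
{
    $em = static::$container->get('doctrine')->getEntityManager();

    $schemaTool = new SchemaTool($em);
    $schemaTool->dropDatabase(); // leave nothing behind for the next class
}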

Using FreshFixtureWebTestCase

To use this class we simply extend it the same way we would WebTestCase. All the fixture-loading code will be called automatically without so much as a thought from us, though I do want to reiterate my prior warning:

If we define setUpBeforeClass() or tearDownAfterClass() in a subclass of FreshFixtureWebTestCase, FreshFixtureWebTestCase’s method(s) will be overridden. To keep the fixture functionality we must call parent::[method_name] from within whichever method(s) we override.
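
For example (PostControllerTest and the /posts route are made up for illustration):

class PostControllerTest extends FreshFixtureWebTestCase
{
    public static function setUpBeforeClass()
    {
        parent::setUpBeforeClass(); // keep the fixture loading!
        // ...one-time setup for this test class...
    }

    public function testIndex()
    {
        $client = static::createClient();
        $client->request('GET', '/posts');
        $this->assertTrue($client->getResponse()->isSuccessful());
    }
}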

Dropping Google Calendar – Webdav to the rescue

There has been a lot of uproar recently about Google’s new (some say lack of) privacy policy.  I’m not really going into the details of that here because frankly, it doesn’t have much to do with my personal decision to move away from their services.  For the last few years I’ve been nervous about how large and monopolistic Google has become and I’ve had the idea in the back of my mind that I might not want to stick with them for much longer; the new privacy policy announcement was really just the little shove I needed to get started with the process.

At the shop (aside from email, of course) the biggest potential problem for us is calendaring.  We’ve been very happy with how nicely Google’s calendars let us share schedules with each other and view them on various devices like Android phones.  I knew that to replace that I would need equivalent functionality or we’d really be moving backwards in our scheduling workflow.

I tried a couple of products with limited success:

  • DaviCal – I couldn’t really even connect to it properly in the first place, even after reading several howto’s
  • CalendarServer (also known as Darwin Calendar Server) – it worked initially, but seemed to flake out a lot (suddenly not serving pages in the web-based interface, etc.)

Then I realized that pretty much any client that supports third party calendars can handle CalDav/WebDav.  This seemed to be my golden ticket, as WebDav is pretty easy to set up and is widely used—meaning that show-stopping bugs aren’t likely to live long.

Right now it’s actually not simple to use WebDav (or anything else besides Google Calendar) on Android phones, so there may be a future post on that. Since that’s the case no matter which method we choose, it’s not really a deciding factor here.

What is WebDav?  The simplest explanation is that it’s a protocol that allows read/write access to files over a network as an extension to HTTP.  As far as I can tell from reading online (though I haven’t seen much on the topic short of reading RFCs), the main difference between CalDav and plain WebDav is that CalDav minimizes the amount of data that must be transferred during a calendar sync.
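
To see what I mean, here’s a quick PHP sketch that lists a WebDav collection with cURL; PROPFIND and the Depth header are two of the things WebDav adds on top of plain HTTP (the URL and credentials are placeholders):

<?php
// PROPFIND is a WebDav-specific HTTP method; Depth: 1 asks for the
// collection itself plus its immediate children
$ch = curl_init('http://servername/webdav/');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PROPFIND');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Depth: 1'));
curl_setopt($ch, CURLOPT_USERPWD, 'myusername:mypassword');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// the server replies with 207 Multi-Status XML describing each resource
echo curl_exec($ch);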

The Apache webserver has a widely supported module called mod_dav that adds WebDav support and that’s what I chose to use.  There is a mod_caldav module but it doesn’t look like it’s kept up very well.  As of this writing it hasn’t been modified since March of 2010, nearly two years ago.

On our CentOS server, things were pretty simple once I got through it all.  I found a few howto’s, none of them complete.  Here’s what I did:

  1. Our virtual hosts are set up by adding VHOST_hostname.conf files to /etc/httpd/conf.d/ so I added VHOST_webdav.conf like so (the mod_dav module was already enabled for us, but you can enable it in /etc/httpd/conf/httpd.conf if not):
    <IfModule mod_dav.c>
        DavLockDB webvar/DavLock
        Alias /webdav "/var/www/webdav"
        <Directory /var/www/webdav>
            Dav On
            Options +Indexes
            IndexOptions FancyIndexing
            AddDefaultCharset UTF-8
            AuthType Basic
            AuthName "Alltech WebDAV Server"
            AuthUserFile /etc/httpd/webdav.users.pwd
            Require valid-user
            Order allow,deny
            Allow from all
        </Directory>
    </IfModule>
  2. Now, notice the line that says “DavLockDB webvar/DavLock”?  It means that mod_dav will create its lock files in {ServerRoot}/webvar with the basename DavLock.  In our case {ServerRoot} is /etc/httpd, so we need to create that directory and give apache write permission to it:
    mkdir /etc/httpd/webvar
    chown apache:apache /etc/httpd/webvar
  3. We’ll also need to create the folder that our Directory container referenced (which is where our files will live):
    mkdir /var/www/webdav
    chown apache:apache /var/www/webdav
  4. Of course we’ll need to create a standard htpasswd file for the users and passwords.  According to our config, we want to create it thusly:
    htpasswd -c /etc/httpd/webdav.users.pwd myusername
    It will prompt you for a password for the user “myusername”.  To add more users, just leave off the “-c” parameter and substitute the new user name.
  5. Now restart apache:
    service httpd restart
  6. You can test it by using the “cadaver” command-line webdav client:
    cadaver http://servername/webdav
    If you don’t get errors and you can do a directory listing with ‘ls’, you win.

Why uid != username

Due to various problems with Ubuntu Server, I’m migrating our server at the shop to CentOS.  Part of that migration included copying over the chroot environments that we use for PXE booting.  When it comes to copying large amounts of data over the network, especially when the permissions and ownership of that data are crucial, rsync is always my go-to tool.  It can use compression to save network bandwidth, uses delta copies for files that have already been copied but have changed, and has options to delete files from the destination that don’t exist in the source.  On the whole, rsync is pretty easy to use as well, and the man page has EXCELLENT examples.  Here’s a simplified version of what I was using (the real one uses ssh tricks to get rsync to run under sudo on the other end):

rsync -avz 192.168.1.14:/nfsrootmav/ /nfsrootmav/

Looks good, right?  What this tells rsync to do is copy recursively and maintain ownership/permissions (-a), use verbose output messages (-v), and use compression during the copy (-z).  I ran this command and all my files copied happily and without complaint.  When I tried to test the copied chroot environment, however, it wasn’t quite right.  When I tried to shut the computer down (booted to the chroot over PXE) it logged out of Gnome instead.  The shutdown command in GDM didn’t work either.

I knew that the only difference between the original and the copy was that the copy…had been copied.  This led me in the direction of permissions, though I couldn’t understand why they’d be different if rsync was running in archive mode.  I tried running diff on `ls` dumps of the original vs. the copied directories, but I never quite got the outputs from the two servers consistent enough for diff to tell me everything I wanted to know.  What I did glean was that if I looked at the uid’s and gid’s of the files numerically, the original and the copy had different ownerships on some files!

What?  How could that be?  Then it occurred to me…I had copied the chroot from the host OS, not from within the chroot.  It turns out that by default rsync copies the user and group owners of a file by name, not by uid and gid.  That makes sense, doesn’t it?  If I have a file that’s owned by, say, “messagebus” on one computer and I copy it to another, I want it to be owned by “messagebus” on the other computer too, even if the “messagebus” user has a different numeric uid there.  It went wrong here because files are stored with a numeric uid/gid, and any time an operation requests or sets the textual user/group, those names are looked up in /etc/passwd or /etc/group.  The files in my chroot had users that matched uids in its own /etc/passwd, but rsync was referencing the host OS’s /etc/passwd.
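
A quick PHP illustration of that lookup, assuming the POSIX extension is available (posix_getpwuid() consults the same /etc/passwd database the system tools use):

<?php
// the filesystem only stores a number for the owner...
$uid = fileowner('/etc/hostname');

// ...the name is resolved through the *current* system's /etc/passwd
$account = posix_getpwuid($uid);
printf("uid %d => %s\n", $uid, $account['name']);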

The solution?  rsync has an option called “--numeric-ids”.  This causes rsync to use the numeric uid’s and gid’s directly rather than trying to look up the plain-text names.  I added that and my files copied properly.  The PXE-booted computer shut down as it should and all was well with the world.

I actually never would have had this problem if I’d chrooted into my /nfsrootmav directory and rsynced out from there because rsync would have been looking at the right /etc/passwd and /etc/group, but then I never would have learned this great lesson.

Nerd music + microprocessor = synthesized fun!

A few weeks ago my wife and I went to Ohio to see some of her family.  During the trip she, her mom, and her sister wanted to go to the local Goodwill store to find some clothes.  This wasn’t an incredibly promising trip for me but I do like to browse through the old electronics; you never know what gems you might find.  Lo and behold, they had an old microprocessor trainer kit for less than $10 laying on a shelf in the original—though yellowed—cardboard box.  It made the whole shopping-with-three-girls thing worth it.

I haven’t had a chance to mess with it until today, but I very quickly learned from the manual (yes, it still has it!) how to work the internal synthesizer to both play music on the fly and program it in for computer playback.  It was pretty hard to just play it like a piano since the notes weren’t laid out very nicely, so I wanted to come up with a good, short, geeky song or riff to test it out with.  What could be better than “Axel F”?  Perhaps “Still Alive”, but that’s a song for another day I suppose.  That one would be near impossible though…I don’t see any way of programming in a rest of any kind, and “Still Alive” has a lot of them.
EDIT: I did find out how to program rests into a song; I just wasn’t reading far enough ahead. I updated the video to one that includes the rests in Axel-F, and maybe I’ll try Still Alive in the future.

Many thanks to my lovely and talented wife Christina for finding the sheet music online and converting it to solfège for me.  The trainer uses solfège instead of the typical A-G notes.

OpenSUSE on the Home Computer

Because of a very irritating (unsolved) problem I had with Ubuntu Lucid on my home computer I decided to try a different distro for a change. I’ve been all over the distro map since I started toying with Linux in 2003 or so. I started out with Red Hat 9, then went to Slackware (10 at the time), Gentoo, Sabayon, then Ubuntu. I also briefly messed with some RHEL clone or other several years back (not CentOS…I don’t remember what it was…star something). Each had its benefits and its drawbacks, but I’ve stayed on Ubuntu the longest of any of them, mainly because of how easy it is. Years ago I laughed at Ubuntu because I thought it was “the n00b distro” but I’ve really learned to respect the power of ease-of-use: if I’m not spending 3 hours getting X to work the way I want, I can spend that time on more important things.

In my journey with Ubuntu and its lovely works-out-of-the-box(usually)-ness I have learned an important lesson, though: when you make something very easy to use, you are by nature likely to sacrifice some stability. Usually that’s not a problem, but in my case I hit a real productivity barrier since large disk reads would pretty much lock my computer up. That told me it was time to break out my rusty skills and go to something one degree less easy and one degree more stable.

I spent a few days deciding what that distro would be and I knew I wanted to try something I’d never used before. My first choice was Debian (the mother of Ubuntu) but there was a bug in the latest version (I don’t remember whether it was testing or stable) that kept it from even booting properly on my machine. My thoughts then drifted to Fedora. I’d used Red Hat 9 years ago, as I mentioned previously, and was eventually frustrated by the fact that Red Hat felt led to do everything differently from the rest of the Linux community; these days, though, there’s so much documentation available online that the differences aren’t so hampering. Besides that, I’ve been getting the feeling that over time the differences have smoothed out a bit.

Still, there was my desire to try something totally new. That nagging urge is what eventually steered me to openSUSE. I knew it would be a good distro to learn since it’s used a lot in industry; plus, since it’s RPM-based, I knew I shouldn’t have an issue finding pre-built packages for any widely used application. Having made my decision, I downloaded the 11.4 ISO and loaded it up.

The installer was a nice, pretty, graphical one…not quite as simple as the Ubuntu installer, but definitely manageable for someone who’d been using Linux for a while. This is one of the things I expected in the trade from easy to stable. The first thing that really impressed me was that I didn’t have to reboot after installing. I was able to dive right in and use the installed version…I had never seen that before.

The layout of the desktop (Gnome, btw) took a little adjusting to get it the way I wanted it but that’s normal for me. I really liked the default theme and I’m still using it. It’s got dark grey panels and dark green titlebars but light window backgrounds and mostly light controls. It’s a pretty good mix of light and dark; I don’t like all dark themes. The default background is a rotating set of backgrounds that vary from very light green to darker green, seemingly following the time of day. All this is very refreshing to me after wanting to gouge my eyes out with my titanium spork every time I load a new version of Ubuntu and have to look at the default theme.

The fonts didn’t look very nice at first, and a quick web search revealed that this was because openSUSE opted to disable sub-pixel hinting by default to avoid falling victim to a M$ patent suit relating to ClearType. Everything is explained in a very nice article describing how to install the version of freetype with sub-pixel hinting enabled.

I also needed an article to learn how to install the proprietary NVidia drivers, and found it pretty easily as well. The nice thing about that article was that it included a special “one-click” install link that took me right to the software installation program and loaded the right package for me; I think I’ve seen “deb://” style links around the net that do that for Debian-based distros but I can’t find any right now.

Software installation is a good bit more complex on openSUSE; several times during updates or new installations it prompted me with questions I didn’t exactly know how to answer. I can’t think of specific examples right now, but it was usually a situation where some potential package conflict could occur and it wanted to know what to do. Usually there was an option to let it take care of things automatically and I just chose that. I also found that I had to add several repos to openSUSE’s software sources before I could get certain packages, some of them rather common. The biggest and most important of these is the Packman repo, which contains a lot of community builds of different software packages. Besides that, the VideoLAN repository is also good for nicer builds of ffmpeg, mplayer, etc. Both of these are called community repositories; they’re basically openSUSE’s version of a PPA. Since they are officially recognized by openSUSE, adding them is very easy:

  1. Go to Yast → Software → Software Repositories
  2. Click “Add” (near the bottom)
  3. Choose “Community Repositories” and click Next
  4. Now simply select the repos you want to add and click OK
  5. Accept any offers to import GPG/signing keys

I still find myself having to look things up every now and again but familiarity is really starting to set in. I do miss the lightning-fast boot times made possible by ureadahead but besides that I don’t really look back too much. I still use Ubuntu on my work computer since it runs fine on it, so I’m not totally abandoning Ubuntu; I’ve just added another wrench to my toolbelt.

PHP Code Formatting with sed

When I first started the Winning Side Ministries website I had one code style…now I prefer another. It happens, right? It shouldn’t be that big of a deal; there are plenty of free tools available that will iterate through source code files and alter the formatting for you. What I’ve found in the last few days, though, is that not many of them really work all that well.

Our website is written in PHP so naturally I began looking for a solution that was tailored specifically to that language. There are only a few of them and I really couldn’t get any of them working in a satisfactory way. At that point I began looking at more generic solutions like GNU indent which is actually targeted at C but works OK with other languages that use C-style syntax. After a bit of toying around I got indent to make the changes I wanted it to make but it was also making other changes that I didn’t want it to make.

That’s when I realized…there were really only 2 formatting styles I wanted to change. Why not just fix them with some sort of regex substitution program? Enter sed (StreamEDitor). It’s a nifty little program that does one thing really well (per the Unix philosophy): take the text it’s given and perform the requested modifications. Now, it’s really meant as a line-based editor and some of my changes required multi-line substitution, but that wasn’t hard to work around after googling for a bit.

Here’s the code style I wanted to convert from:

function myFunc($arg)
{
    print("some stuff");
    if($arg == 2)
    {
        print("Arg is 2!");
    }
}




function myFunc2()
{
    print("I'm func 2!");
}

And here’s the style I wanted to convert to:

function myFunc($arg) {
    print("some stuff");
    if($arg == 2) {
        print("Arg is 2!");
    }
}

function myFunc2() {
    print("I'm func 2!");
}

So we’re going from having 4 blank lines between functions to only 1 (4 blank lines means 5 consecutive newline characters, which is where the pattern below comes from), and from having opening braces on a separate line to having them on the same line, preceded by a space. Here’s the sed script I came up with:

# this line reads the whole file in so we can do multiline substitution
:a;N;$!ba;

# this collapses the 5 consecutive newlines (4 blank lines) between
# functions down to 2 (a single blank line)
s/\n\n\n\n\n/\n\n/g

# this puts curly braces on the same line with the control structures/function definitions to which they belong
# be aware that if there is a set of lines like this:
#
# if(myTest) //here's some documentation
# {
#     next line of code
# ...
#
# the curly brace will end up in the end of the comment; watch out for that
s/\n[ ]*{/ {/g

That first line…I copied and pasted that from this answer to this stackoverflow question. I don’t know sed well enough to really explain it, but there’s some explanation in the answer comments there so check it out if you want to know more.  The basic idea is that sed acts on one line at a time, so it can’t match across multiple newlines without a workaround; the :a;N;$!ba; loop keeps appending the next line to the pattern space until the whole file has been read in.  The tr solution also proposed on that same question won’t work for me since tr works character by character and as a result doesn’t support full regexes.

I did run into the problem described in the script’s comment, but only once. It wasn’t worth complicating the script to fix that though it may not be difficult. I don’t know…I didn’t bother.

To run the script you just call sed like so:

sed -i -f format_code.sed *.php

And you’re done! The “-i” makes it edit the files in place so you don’t have to do any kind of weird redirection or write a script around it. It just works.

New server at the shop

It has been a long time since I’ve written anything, I know. I’ve been so busy recently with several different things; I’ve wanted to write about all of it but I just haven’t had the time. Maybe I’ll cover a little bit more of it soon but right now I wanted to talk about an interesting problem we had with the new server at the computer shop.

On the hardware side it’s a Dell PowerEdge 1950 with a quad-core Xeon 2.33GHz CPU, 4 GB RAM, and a mirrored 500 GB RAID array. It’s not what you’d call cutting edge in the data-center but it’s a far cry from the dorky little single-core Pentium 4 thing we had been using.

Software-wise it’s running Ubuntu 10.04 Server which is taking care of our file-server needs along with housing our squid caching proxy and PXE boot environments. Since this server is equipped with dual gigabit LAN we bit the bullet and bought a couple of gigabit switches as well, which has really made a difference in Quickbooks’ performance alone.

For various reasons (not least of which is Quickbooks) all of our workstations in the shop but mine are running Windows which means file sharing must be done over SMB using Samba. I set everything up as usual with mostly defaults in my smb.conf. The shares were added as usershares with full write access by everyone for the sake of simplicity; that shouldn’t be a problem since iptables is blocking all traffic originating from the WAN port.

Since everyone was so excited about the gigabit stuff, my boss fired up a couple of file transfers and began listening to mp3 files over SMB just to see what kind of performance we could get. The funny thing was that every so often the music would just hang for no reason for 20-30 seconds at a time. During the hang his computer wouldn’t be able to browse the file shares at all and all file transfers would hang as well. I checked the system logs on his computer and dmesg on the server and neither mentioned anything related to the problem. Finally I found the Samba session logs (these are in /var/log/samba on Ubuntu) which are unique per client and saw lines in it like this:

"Unable to connect to CUPS server localhost:631 - Connection timed out"

Timed out? Yeah, that sounds like it. A bit more investigation led me to realize that the timing of the pauses and the errors seemed to correlate. Searching for that error on the Internet was very difficult, though; it seemed like everybody with a Samba client log posted on the web had that error, and nearly all the posts I found were asking about unrelated issues.

Disgusted with searching after a bit, I tried the obvious solution of commenting out all printer-related lines in smb.conf. No luck…apparently printing support is enabled by default even if it isn’t configured. Back to searching.

Finally I found one guy who was complaining about a big lag during log-in on password protected shares and was getting that same CUPS error. He figured out how to get Samba to stop trying to connect to CUPS using these configuration lines. He admits that they might not all be needed, but it doesn’t hurt my feelings to smash a bug a little harder than necessary:

# keep Samba from ever contacting CUPS: no shared printers, a CUPS-free
# printing backend, an empty printer list, and no print-spooler RPC
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes

Around the same time I also found that many people were saying that this particular configuration tweak was supposed to give better throughput when serving Windows clients:

socket options = TCP_NODELAY

I shamelessly applied both fixes at once.  At the time I just really needed it to work (we’d already moved our Quickbooks company file) and didn’t care too much which one fixed it so long as it was fixed.  And it was.

No one really mentioned the socket options tweak affecting any sort of hangs or delays so I’m certain that disabling print services in Samba was the real kicker.  Apparently there are very few people who set up Samba on a server without CUPS, but we have no need for a print server like that.  We don’t have a big copier or anything; our only printer is a small black and white laser at the front desk which is 30 feet and several walls away from our server.

So we have another problem solved and I feel pretty good about it. It’s nice to find the solution to a tough problem and I thank the good Lord for helping me.