Working out the container-classes API

During AusHac, I worked on the container hierarchy I discussed in my previous post, which culminated in the initial release of container-classes. I had initially (and naively) thought I would be able to whip something like this together on the Friday afternoon and spend the rest of the weekend working on graph libraries; in the end I just managed to release an initial draft version before we had to pack up on Sunday.

Now, I’m not saying this current setup is perfect; it’s basically a direct copy of all the list-oriented functions from the Prelude, along with a couple of functions from Data.List, split into a generic Container class, a Sequence class for containers with a linear structure, and a Stream class for infinite Sequences (i.e. lists and similar structures: those for which it makes sense to define a function like repeat).

First of all, here are a couple of design decisions I made with this library:

  • I want to be able to consider types with kind *; as such, most pre-existing classes are of no use.
  • Even when hacking together support for types of kind * -> * for mapping functions, etc., I couldn’t use Functor as it doesn’t let you constrain the type of the values being stored (for Sets, etc.).
  • To be able to have such restrictions, we need to be able to specify the value type as part of the class definition. This means using either MPTCs+fundeps or an Associated Type. I was initially using the latter, but because the current lack of superclass constraints made the type signatures much uglier and longer, I switched to using MPTCs+fundeps instead (a sketch of this approach follows this list).
  • Type signatures should be as short/nice as possible.
  • Provide as many default implementations as possible, and make those as efficient as possible.
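
To make the MPTC+fundep shape concrete, here is a minimal sketch of the approach (class and method names are illustrative only, not the actual container-classes API):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

import qualified Data.Set as Set

-- The functional dependency c -> v pins down the value type for each
-- container, which is what lets us constrain it (e.g. Ord for Set).
class Container c v | c -> v where
  empty  :: c
  insert :: v -> c -> c
  member :: v -> c -> Bool

instance (Eq a) => Container [a] a where
  empty  = []
  insert = (:)
  member = elem

instance (Ord a) => Container (Set.Set a) a where
  empty  = Set.empty
  insert = Set.insert
  member = Set.member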

However, with these design decisions there are some considerations I have to make:

  • How should I split up the various functions into type-classes? e.g. does it make sense to re-define standard classes like Foldable so that they’ll work with values of kind * (where possible) and if necessary have a constrained value type?
  • At the moment, the main constraints are all inherited from the Container class; if I have lots of smaller classes, is there a nicer way of abstracting out the constraint without duplicating it everywhere? rmonad has the Suitable class, but in practice this seems to mean adding extra Suitable f a constraints to every function; maybe this is because Suitable isn’t a superclass of the other classes, though.
  • I’ve tried to define the default definitions of the various class methods with the eventual goal of implementing and using the foldr/build rule, but I’m not sure how to properly implement such a rule, let alone how well it’s going to work in practice:
    • If someone overrides the defaults to use custom/optimised versions (e.g. all the pre-defined list functions), then the inter-datatype optimisations will no longer be present.
    • As people may use more optimised variants of various class methods, any data type that extends another (using a newtype, etc.) will have to explicitly define each class instance rather than relying on the default definitions (if they want to keep using the optimised variants).
    • Cross-container optimisation could change some fundamental assumptions: e.g. going from a list to a Seq and then back to a list will typically preserve the value ordering; however if we replace the Seq with a Set then we’d expect the ordering in the final list to have changed (and be sorted); if I implement the foldr/build rule would it interfere with this ordering by removing the intermediate Set and the fact that it will insert values in sorted order?
  • Benchmarking: is there a nice way of doing per-class benchmarking to be able to compare the performance of different data structures? For example, comparing how long it takes to insert 100 random values into a list (by consing) against inserting those same values into a Set (see the sketch after this list).
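
As a rough sketch of the kind of comparison I have in mind, using the criterion package (all names here are illustrative, with a fixed input list standing in for the 100 random values):

import Criterion.Main
import qualified Data.Set as Set

-- Compare inserting the same values into a list (by consing) and into a Set.
main :: IO ()
main = defaultMain
  [ bench "cons into list"  $ nf (foldr (:) [])               vals
  , bench "insert into Set" $ nf (foldr Set.insert Set.empty) vals
  ]
  where
    vals = [1 .. 100] :: [Int]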

So, that seems to be the battle I’ve taken upon myself. I’d greatly appreciate any pointers people can give me either as comments here or by emailing me.

Oh, and a reminder: I’m going to stop collecting responses for my survey on what to call the “new FGL” library at about 12 PM UTC this Friday. I’ve already got about 60 votes; more are welcome.

Graphs, Haskell

Data-Oriented Hierarchies

In the Haskell community, there are several topics of discussion that keep coming up over and over again in terms of dealing with the hierarchies in our code. Some of these topics are:

  • Fixing the Functor/Applicative/Monad class hierarchy (however you want to structure it);
  • The best way to define and use monad transformers;
  • Making Functor more relevant; taken to the extreme by the “Caleskell” definitions used by lambdabot on IRC, where it seems almost everything can be expressed in terms of fmap.

Now, I think this kind of discussion is an indication of good health in the Haskell community: we are doing our best to determine what the optimal solutions to these problems are (rather than just giving up or being dictated to by a single individual). However, something I’ve come to realise recently is that these discussions are mainly oriented at the best way to abstract how we write code, rather than how we use the data structures that make up the code. Hence, the topic of this blog post.

My Goal

What I want to discuss here is how we can best define class hierarchies that let us easily interchange our data structures. The motivation is that currently, if I write some code using a list as my underlying data structure and then decide that a Sequence would be a better fit because I do a lot of appends, I have to re-write every single bit of my code that knows about that particular data structure. I would much prefer to change just a few top-level type signatures and maybe some list-specific items in my code, and then have the magic of type classes take care of the rest.

Avoiding Duplication

Such a hierarchy would be most useful when writing libraries: it avoids the duplication of having to write a list-specific, a Sequence-specific and a Set-specific version of a function (e.g. one to test whether the data structure in question contains at least two of the provided values). More than that: often we are constrained in how we use libraries by whichever data type the library author preferred at the time of writing. A library function may require and then return a list, whereas we’re using Sets everywhere else. If there is no pressing reason for it to use a list rather than a Set, then why should it?

Is such a hierarchy already available?

There are some previous attempts at something like this, including (but not limited to):

  • Functor + Foldable + Traversable; this approach can’t deal with structures such as Sets as they require an extra restriction on the parametric type.
  • Edison can cope with Set, etc. and has a nice hierarchy between the individual sub-classes (if anything it has too many sub-classes), but is used by very few packages and has what I consider to be a few warts, such as explicitly re-exporting the data types in question in new modules, and some methods (such as strict) that really belong elsewhere.
  • collections seems to have been another attempt at this, but doesn’t appear to have built on any version of GHC since 6.8.
  • When you only want to consider data types with a linear structure, ListLike is available. However, it seems to be somewhat over-busy.
  • Even more specialised than ListLike is IsString, the point of which is to be able to use string literals in Haskell code to define ByteStrings, etc.

The closest viable class/library to my ideal listed above would be a cross between Edison and ListLike; the former has an actual class hierarchy (to avoid duplication, etc.), whereas the latter seems to be used more in actual practice.

My point here about a class hierarchy is this: in most aspects, any sequence (or “ListLike” data structure) can be considered a really inefficient generic collection/set: you still want to have a function to test for membership, you want to be able to add values, to know how many there are, etc. As such, definitions should be as high up in the hierarchy as possible to let functions that use them be as generic as possible in terms of their type signatures.

The Joker in the deck

There is one conflicting issue in any such hierarchy: mapping.

Ideally, we wouldn’t want to require that instances of these classes have kind * -> * (so that we can for instance [pun not intended] make ByteString an instance of these classes with a “value type” of Word8). However, as soon as we do that we can no longer specify a map function nicely.

ListLike gets around this by defining a map function that doesn’t constrain the result to the same data structure type. This means that it’s possible to write map succ with a type of ByteString -> [Word8]. Whilst this might be handy at times, it also introduces possible type-matching problems if your overall usage doesn’t force the two structures to be the same (e.g.: print $ map (*2) [1,2,3,4]). Furthermore, this definition of map is in essence a complete fold over the data structure, whereas more efficient versions may exist if we can somehow specify at the type level that the result must be the same data structure.

However, the problem is that technically [Int] and [Char] are two completely separate data types; as such, any map between them will require going from one type to another (since we’re not assuming kind * -> * here). It is possible to get around this, but it’s pretty ugly:

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, FlexibleContexts #-}

class Collection c a | c -> a where
  cons :: a -> c -> c

class (Collection (c a) a) => MappableCollection c a where
  cmap :: (MappableCollection c b) => (a -> b) -> c a -> c b

instance Collection [a] a where
  cons = (:)

instance MappableCollection [] a where
  cmap = map

In essence, the whole point of the MappableCollection class is to force the Collection instance back into having kind * -> *. It might be better just having Collection use ListLike’s rigidMap and then leave “normal” mapping up to Functor or RFunctor (which works better with the whole “class hierarchy” concept). It’s just a shame that there’s no way of having mapping work regardless of the kind of the data type.

So, what are you going to do about this?

I’m going to make a stab at yet-another-collection-class-hierarchy this weekend at AusHac. I’m not sure how far we’ll get, but I’ll see.

Graph Hierarchies

My interest in data structure hierarchies came out of my frustration at the lack of a common reference point for graph data types. Developing a base hierarchy is going to be my main focus at AusHac (the collections classes are aimed at being used within this graph library). My current plans look something like this (note that this doesn’t include extra packages providing specific instances, such as “vector-graph” or something):

That is, the actual “graph” library will also cater for other graph-like data structures, such as Cabal’s PackageIndex type. FGL (both the old and the “new” version, whatever it’ll be called) will then extend these classes to provide the notion of inductive graphs; anything that isn’t directly related to the notion of inductive graphs will be shifted down to this notion of “generic graphs”.

In terms of terminology, to ease the transition I’m probably going to stick to current FGL-nomenclature for the most part (unless there’s something horribly wrong/bad about it). So we’re still going to talk about Nodes rather than Vertices, etc.


As I intimated in the extended announcement for fgl-, apart from bug-fixes we’re not going to work on the current 5.4 branch. The 5.5 branch will be developed so as to use the generic graph classes once I’ve got them sorted out, and then that will probably be the end of it.


Now, this has become a rather hot topic: should a rewrite of FGL still be called FGL? I’ve covered this earlier, but I have now created a survey to try and find out what the community thinks it should be called (I did want an “other” option in the first drop-down menu, but Google Docs wouldn’t let me 😦 ).

Graphs, Haskell

Pre-announce for the new FGL

This post is serving as a heads-up for those people that do not read the Haskell mailing lists, as well as an overall summary/history.

Edit: In the comments below, Edward Kmett points out that in some cases having a class where you know that the kind of the graphs is * -> * -> * may be beneficial (and demonstrated to me in #haskell that my counter-example doesn’t quite work). As such, I’ve sent a new email discussing a possible alternative class layout to keep using these kinds of graphs which may make the upgrade process a little easier.

FGL in new hands

About six weeks ago, Martin Erwig announced that he was giving his Functional Graph Library (aka FGL) up for adoption. Thomas Bereknyei and I volunteered to take over maintainership and have been working on it ever since.

It’s FGL, but not as you know it

Thomas and I didn’t want to merely do bug-fixes, however. There were a few changes we wanted to make to the FGL API, the major points of which are:

  1. Provide greater scope for customisation/optimisation of instances
  2. Remove the restriction that graphs are of kind * -> * -> *
  3. Allow instance writers to restrict the types of the node and edge labels
  4. Allow instance writers to choose a custom (i.e. not just Int) type for the node index type
  5. Remove the (in our opinion) over-usage of 3-tuples and 4-tuples and define explicit data types for Context, etc. (with record functions) to make it easier to deal with them without requiring explicit pattern matching, etc.
  6. Proper Eq, Show and Read instances for graphs.

Whilst doing all this, however, we’ve also kept to the “spirit” of what makes FGL unique: the notion of inductive graphs. Where it makes sense, we’ve also kept the names and terminology of the current version to make the transition more seamless.

It’s not that simple, however

Primarily because of reasons 2 – 4 above, the new version of FGL we’re working on is not backwards compatible. For starters, if we don’t require kind * -> * -> *, then any code currently written against the old classes (which do require that kind) won’t work.

However, even if we kept that restriction and just wanted the ability to restrict the label types, we’d have to have some way of making those label types part of the overall type class[es]. As such, we would need some extensions, and the two possible candidates are:

  • Multi-Parameter Type Classes (MPTCs) with functional dependencies;
  • Type Families (in particular, associated types).

Now, MPTCs + fundeps have the advantage of being older, having more infrastructure built around them, etc., whereas there are still many limitations for Type Families (the inability to have superclass constraints, no automatic deriving for types or aliases using Type Families, etc.). However, we feel that, code-wise, using Type Families results in much nicer and cleaner type signatures.

Let us consider a few examples: what does the instance definition of a graph like the one currently found in Data.Graph.Inductive.Tree look like, and what would the type signature be for a function that takes a graph and returns a list of all of the edge labels?

First of all, if we define it with MPTCs + fundeps:

-- Note the explicit double mention of the two label types here;
-- this is needed because if we want to remove the kind restrictions
-- on graphs, the first parameter must be the _entire_ graph type.
instance Graph (Gr a b) a b where

edgeLabels :: (Graph g a b) => g -> [b]
edgeLabels = ....

Now with Type Families:

instance Graph (Gr a b) where
    type NodeLabel (Gr a b) = a
    type EdgeLabel (Gr a b) = b

edgeLabels :: (Graph g) => g -> [EdgeLabel g]
edgeLabels = ...

Now, whilst we need to explicitly double-up the two label types in the instance definition for the MPTC-based solution, this isn’t that bad since most users won’t see it. However, it isn’t immediately obvious just from the type signature of edgeLabels what it actually does.

As for the Type Family solution, it is more verbose but arguably the types are easier to read. Furthermore, why should the two label types be first-class members of the type definition? “g a b” isn’t the graph type, just “g” on its own is.

The situation becomes a bit more tiresome when we consider the fourth improvement we wanted to make: the ability to specify a custom node index type. We’ll now use some graph type that has kind * -> * -> * -> * where the first type parameter is the vertex type (i.e. some generic graph type that people can use with any index type they wish, preferably by newtype-ing it).

With MPTCs + fundeps:

instance Graph (Gr n a b) n a b where

edgeLabels :: (Graph g n a b) => g -> [b]
edgeLabels = ....

Now with Type Families:

instance Graph (Gr n a b) where
    type Node (Gr n a b) = n
    type NodeLabel (Gr n a b) = a
    type EdgeLabel (Gr n a b) = b

edgeLabels :: (Graph g) => g -> [EdgeLabel g]
edgeLabels = ...

Notice that the type signature for edgeLabels when using the Type Family solution remains unchanged, and why shouldn’t it? We feel that with Type Families we can better express what a function does with a minimum of clutter/boiler-plate.

Now, you may still feel that MPTCs are preferable, if only because they’re less experimental. Either way, we would still need extensions, and hence there would be API breakage.

What does that mean for us?

At the moment, very little. We plan to release a “technology preview” release of the new version of FGL within the next few weeks (i.e. whenever we get around to cleaning our current code up and adding documentation). However, we don’t expect (and in fact highly discourage) anyone using the new version when we do so: we plan on releasing these solely to obtain comments and opinions from the community. We realise that FGL is a respected and honoured member of the Haskell library community, and don’t want to make drastic changes without obtaining the community’s input on what would be the best way to proceed.

But there is one thing that should be done by anyone who maintains Haskell software that uses FGL: restrict the upper bound of the FGL version used (either by explicitly using fgl == or, in case you think we’ll release a bug-fix version, then either fgl == 5.4.* or fgl >= && < 5.5 if your code uses Data.Graph.Inductive.PatriciaTree). This is something you should be doing anyway, as FGL follows (or at least will be following from now on, if it hasn’t been doing so fully up until now) the Package Versioning Policy; as such, you should specify version bounds that you know contain the API you need.
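
For illustration, the corresponding constraint in a .cabal file would look something like this (version numbers merely indicative):

Build-Depends: base >= 3 && < 5,
               fgl == 5.4.*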

The naming controversy

When I stated on the Haskell mailing lists that people should ensure they have proper upper bounds on the version of FGL their packages use, some people stated their opinion that we should not in fact be changing the API of FGL, and that if we wish to do so then we should fork it and give it a new name.

Now, I’m not going to repeat all of my arguments again (not forgetting this one) here. However, this entire issue has sparked off a discussion on when large-scale API breakage in a major library is appropriate; this resulted in this wiki-page, which uses the case of FGL to try and determine guidelines for such situations.

Now, if it’s the overall community’s opinion that it should be renamed, we of course will. However, to truly solve this problem we would also need to think of new module names, and I for one can’t really think of anything more appropriate than what FGL currently uses, Data.Graph.Inductive :s.

What is the bottom line?

The “tl;dr” version of this is that we’re working on a new version/successor to FGL; we plan to add lots of cool features at the expense of requiring some extensions; you should check the dependencies of any software you have that depends upon FGL now (since it’s good practice to do so anyway); and there’s controversy over whether or not we should re-write the API and still call it FGL.

Gentoo, Haskell, Rants

Repeat after me: “Cabal is not a Package Manager”

It seems that every few weeks someone (who has usually either just started to use Haskell or doesn’t seem to be active in the Haskell community) comes out with the fallacy that Cabal is the “Haskell package manager”. I’ve gotten so sick of explaining why this isn’t the case on IRC, in blog comments, etc. that I’ve decided to write a blog post to set things straight.

Disclaimer: I am not a Cabal developer (I did submit a patch or two to fix a couple of documentation problems I found, but I don’t know if they were applied or if they were re-written, etc.) so this is not the official line, just my own opinion (hmmm… “IANACD” doesn’t quite trip off the tongue…).

Cabal /= cabal-install

First of all, there is the common misconception that Cabal provides the command line tool cabal. This is not the case: this tool is provided in the cabal-install package. This unfortunately (for comprehension’s sake) named tool is completely distinct from Cabal-the-library except for the fact that it uses that library. To help understand why there is this distinction (as opposed to other language-specific installation tools such as RubyGems), I highly recommend this video by Duncan Coutts. The short version is: Cabal depends directly on Haskell compilers and nothing else (and is indeed shipped with GHC); cabal-install needs other packages (for network access, etc.) as well.

For the rest of this blog post, I will assume that people are instead referring to cabal-install as a package manager (which is what most of them intend) or else the entire Hackage ecosystem (see below).

What is a package manager?

According to that most authoritative source Wikipedia, a Package management system (of which a package manager is merely one component, namely the tool that is used) is:

… a collection of tools to automate the process of installing, upgrading, configuring, and removing software packages from a computer.

Coupled with such a tool is also a large (well, it could be small but that kind of defeats the point) collection of packages used by the package manager. Together, these provide a way of (hopefully) seamlessly installing packages without end users having to consider what is needed to do so. For example, let us assume that a Gentoo user with no other Haskell-related software present is wishing to install XMonad (one of the most popular pieces of software written in Haskell). In that case, all they need to do is:

emerge xmonad

This will bring in GHC and all other related dependencies (even non-Haskell ones!) so that the user doesn’t have to worry about all that kind of stuff.

The Haskell “Package Management System”

The “equivalent” to a package management system for Haskell is the Hackage ecosystem: HackageDB + Cabal + cabal-install, which provide the list of packages, the build system and the command line tool respectively. However, this ecosystem is not a package management system:

HackageDB

HackageDB is the central repository of open-source Haskell software. However, it is limited solely to Haskell software that is installed using Cabal. As such, it is not closed under the dependency relation: entering the incantation cabal install xmonad at a prompt on a Haskell-free system will not first bring in GHC and then all other dependencies (actually, it is not even possible to utter that incantation, since neither Cabal nor cabal-install will be available unless a pre-built cabal-install is being used). Furthermore, for any GUI libraries/applications that use the Gtk2Hs wrapper library around Gtk+, Hackage will be unable to install those packages (to the great confusion of many who state that they do indeed have gtk+ installed when Cabal and cabal-install complain about the gtk+ sub-library from Gtk2Hs) since Gtk2Hs isn’t Cabalised.

As such, HackageDB cannot really fulfill the requirements of being a proper component of a package management system.

Cabal

Cabal is the Common Architecture for Building Applications and Libraries, which not only acts as the build system (analogous to ./configure && make && make install) of choice for Haskell packages, but also provides package metadata to other libraries and applications that need it via a library interface. Nowadays, the only mainstream Haskell packages that don’t use Cabal are GHC itself (since it contains non-Haskell components; furthermore, this would involve bootstrapping issues, since a Haskell compiler is needed to build Cabal) and “legacy” libraries and tools such as Gtk2Hs (which it is currently not possible to build using the Cabal framework, for various reasons).

Cabal obtains its information from two files:

  1. A .cabal file that contains a human-readable description of the package including dependencies, exported modules, any libraries and executables it builds, etc.
  2. A Setup.[l]hs file that is a valid Haskell program/script using the Cabal library, which performs the actual configuration, building and installation of the package; for most packages this is a mere two lines long (one to import the Cabal library, the second stating that it uses the default build setup; see below).
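
For reference, that standard two-line Setup.hs is just:

import Distribution.Simple
main = defaultMain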

As such, Cabal is extremely elegant, especially when compared to build systems such as Ant. However, it is not a valid specification format for packages that are part of a fully-fledged package management system: it cannot deal with all possibilities (e.g. Gtk2Hs) nor all dependencies (it allows you to state any C libraries needed at build time, but not libraries needed from other languages, nor any other tools or libraries needed at run-time; for example, my graphviz library for Haskell really needs the “real” Graphviz tool suite installed to work properly, but there is no way of telling Cabal that).

Why not XML?

Some people have stated that Cabal should really switch to an XML-based file format because it is a “standard”. Even if we assume that XML really is such a well-defined standard (though we’d need to define a Schema for use with Cabal), XML has one large failing: it is not human readable. One of the greatest features of Cabal is that its file format is perfectly readable and understandable by humans (even if it is at times imprecisely defined in some aspects), such that if I have to check dependencies, etc. for a Cabalised package I can quickly skim through its .cabal file and just read it without having to decode it (especially since usage of XML would remove any need for newlines, etc. which are used for readability purposes). Furthermore, it would require Cabal to use an XML parser, which means an extra dependency (whereas at the moment it needs only a compiler).

cabal-install

The choice of both package and executable name for cabal-install was unfortunate (if understandable) in that too many people confuse it for Cabal. Whilst it may use Cabal and act as a wrapper around both it and HackageDB, it is indeed a completely separate package. So remember, whilst you may do cabal install xmonad, you’re not using Cabal to do that but rather cabal-install.

cabal-install brings a convenient command-line interface, dependency resolution and downloading to the Hackage ecosystem. Assuming that you have GHC installed, cabal install xmonad will indeed determine, download and build all Haskell dependencies for XMonad. However, this is not always the case.

As a wrapper around Cabal and HackageDB, cabal-install inherits all of the reasons why they do not form a valid package manager. However, it adds a few warts of its own: it only manages libraries. That doesn’t mean that it can’t install applications, because it can; it’s just that once it has installed an application it can’t tell that it has done so. That’s because rather than have its own record of what it has installed and what is available, cabal-install uses GHC’s library manager ghc-pkg to determine which libraries are installed. Since GHC doesn’t know which applications are installed, neither does cabal-install. As such, if one tries to install haskell-src-exts (or any package that depends upon it) on a “virgin” Haskell system, then it will fail, since the parser generator happy isn’t installed. cabal-install (or rather Cabal) can detect if happy is available, but will not automatically offer to download and install it for you. Whilst this might be more a temporary limitation (in the sense that no-one has yet added support for build-time tools to cabal-install’s dependency resolution system) than a problem with cabal-install, it still requires extra user intervention.

There is a yet more serious impediment to being able to properly consider cabal-install a package manager (since other package managers may also require user intervention at times when installation fails for some reason). Let us once again consider that definition of a package management system, this time with some emphasis added:

… a collection of tools to automate the process of installing, upgrading, configuring, and removing software packages from a computer.

Did everyone spot that not-so-subtle hint? cabal-install can’t un-install Haskell software. Why not? Partially because Cabal doesn’t support uninstallation: whilst Cabal can un-register a library from ghc-pkg, it won’t remove any files it installed. Furthermore, because cabal-install doesn’t track which applications it has installed, it is definitely unable to uninstall them since it has no idea which files it needs to delete.

Partially linked to this problem of uninstallation is another segment of that definition: upgrading. Whilst cabal-install can install a new version of a package, it cannot remove old versions. However, this isn’t why the cabal upgrade option has been disabled: GHC ships with several libraries upon which it itself depends; these are known as the boot libraries. Originally cabal-install offered to upgrade these libraries, with at times disastrous results.

But what if we fix cabal-install???

What if cabal-install starts recording which packages it installs and which files they include? With that, uninstallation would be possible, and if it can also tell which libraries are boot libraries, then upgrading should be possible too. As such, cabal-install could be considered a proper package management system, couldn’t it? Pretty please?

Unfortunately, no: as mentioned earlier, between them HackageDB and Cabal can only be used to install packages written in Haskell that are Cabalised. As such, the package set cannot be closed under dependencies, and cabal-install cannot install all necessary dependencies (both build-time and run-time).

Why you should use your distribution’s package management system

Many GNU/Linux users in the Haskell community express disdain for their distribution’s package management system and vehemently express their preference for cabal-install. Here are several reasons, however, why you should use your distribution’s package management system:

  • Proper dependencies: system packages can bring in all required dependencies, no matter what language they were written in, etc. How are they able to do this when cabal-install can’t? Because they are (hopefully) checked by the most clever computational device known: the human brain. Good system packages have all dependencies listed explicitly, and in the case of those that are marked as “stable” are usually tested on a larger number of machines, architectures and software configurations than the upstream developer is capable of.
  • Package patching: a common complaint with the current status of HackageDB and Cabal is that if there is a mistake with a package’s .cabal file (usually due to package maintainers being either too strict or too lax in terms of dependency version ranges), then users are forced to either manually download, edit and install (thus losing cabal-install’s dependency resolution) those packages or else wait for the package maintainer to release an update. System packages, however, are able to work around these problems by either providing ready-built binary versions of those packages or else patching the package to get it working. For example, when the duplicated instance problem arose between yi and data-accessor, I was able to edit the yi ebuild to remove its own instance definition so that it wouldn’t clash with data-accessor’s: as such, Gentoo users who wanted to install yi wouldn’t even know such a problem existed, which is how it should be.
  • It Just Works: linked to the above point, system packages are more likely to install and work on the first attempt than using cabal-install, because they have (hopefully) been tested to do so within that particular distribution. This also includes choosing the correct compile-time flags, etc.
  • Integration: when using system packages, the Haskell packages you have installed are first class citizens of your machine, just like every other package. Applications are installed into standard directories, as are libraries, documentation, etc. They will also interact with all the other packages on your system better: for example, there are various wrapper scripts that come with different packages that wrap around darcs; by using a system install of darcs then these wrapper scripts will work, whereas they might not if it’s installed in your home directory.
  • Done for you: someone has put in the effort to write the system package for that particular Haskell package; why shouldn’t you be grateful to them for doing so and use it?

There are of course various reasons why people prefer not to use system packages for Haskell packages:

  • Out of date packages;
  • Limited variety of packages;
  • Not built with the wanted options.

However, there are two possible solutions to these problems. The first is that if you are a serious Haskell hacker and your distribution doesn’t support Haskell well, then why not try another distribution? Arch and Gentoo are usually recognised as having the best Haskell support, with Fedora seeming to have decent Haskell support among the more release-oriented, ready-to-go distributions.

Alternatively, get more involved with your distribution’s Haskell packaging team or start one if there isn’t one there already: get what you want in your distribution and help other people out at the same time. It usually isn’t hard to at least make unofficial packages: I started off writing Haskell ebuilds for Gentoo by copying and editing ones that were already there; nowadays we have our hackport tool (available at app-portage/hackport in the Haskell overlay) that generates most of the ebuild for you, especially for simple packages that don’t need much tweaking. Failing that, just ask for a new/updated Haskell package.

So is cabal-install useless?

Not at all: cabal-install still serves four useful purposes:

  1. Building and testing your own packages during development;
  2. On OSs without a package management system (e.g. Windows);
  3. You are unable to use your system’s package manager for some reason (e.g. need a custom build with different compile-time flags);
  4. You are unable to install system-wide packages on your work/university computer (which is the situation I face at uni, unless I take the “manage your own computer and if it breaks don’t come to us crying” approach).

However, I still strongly recommend you use your system package manager whenever possible.

Conclusion

I have tried to set out at least some of my reasoning as to why I believe that cabal-install is not a package manager (as so many people seem to believe), and why the Hackage ecosystem overall is not a valid package management system. Furthermore, I have covered why I believe Cabal should not switch to XML-based files for its metadata and why you should strongly consider using your OS’s package management system (if it has one) over installing packages by hand with cabal-install.

If nothing else, it is my sincere hope that this blog post will at least stop people talking about Cabal when they mean cabal-install.

Haskell, Uni

Now @ ANU

I’m now in my fourth week of a PhD in Computer Science at the Australian National University under the supervision of Professor Brendan McKay (of nauty fame). My topic is as yet not completely defined, but at the moment it is to do with the comparison of graph generation algorithms.

PEPM

In January this year, I presented a paper on my SourceGraph program at the 2010 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation (PEPM), a reprint of which is available here. For those of you who are interested, the slides I used (which were heavily edited the night before I gave my talk!) are also available. I would have made these available before now, but as soon as I arrived back in Australia I was busy getting ready to move down to Canberra, and I only obtained internet access last week.

The experience was interesting: not only was this the first time I had presented at a professional conference, it was also the first time I had even attended one. What made it even more interesting was the large proportion of presenters who (like myself) were not from the actual targeted field (e.g. Florian Haftmann, who presented a paper on Isabelle and Haskabelle, is from a theorem-proving background).

I’m still working on SourceGraph (if you compare the paper to my Honours thesis you should be able to spot the improvements it has already received); however, the fact that I actually have university work I’m meant to be doing is proving a hindrance.

Other Projects

Generic graph class

Some people have expressed interest in whether or not I’ve gotten anywhere on my generic graph class proposal. The answer is that I have a rough-and-ready form of one based upon initial discussions I had with Cale Gibbard that uses associated types to define various types such as the vertex type, vertex label type and arc label type. However, the main stumbling block at this stage is that I wanted to add a sub-class dealing with “mappable graphs”, i.e. those graph types that support applying a mapping function over the vertex and arc labels. The problem is, superclass equalities have not yet been implemented within GHC as of 6.12 (but maybe 6.14 will have them). In any case, if anyone is interested I can send them the current messy version, but even without the mappable graphs class it still requires at least GHC 6.10 for associated types support.
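
To give a flavour of what the associated-types version looks like (an illustrative sketch only; the names are hypothetical and the real draft is messier):

{-# LANGUAGE TypeFamilies #-}

-- Each graph type declares its own vertex type and label types.
class Graph g where
  type Vertex g
  type VertexLabel g
  type ArcLabel g

  vertices :: g -> [Vertex g]
  arcs     :: g -> [(Vertex g, Vertex g, ArcLabel g)]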

graphviz

Noam Lewis (aka sinelaw) had asked me to add functionality to augment a list of DotGraphs using a single dot (or related command) process; however it turns out that for more than five DotGraphs this takes longer than doing each one individually. As far as I can tell, the problem relates to the rendering of the pretty-printed list of DotGraphs. As of now I’ve rolled back support for this, but I may try rendering to lazy text values, which will also let me avoid encoding problems by forcing usage of UTF-8.

I’ve also been asked by several people to add support for record labels; I’m doing so, but I got side-tracked into implementing proper support for HTML-like labels, which involves a lot of manual experimentation to determine what is valid syntax, etc.

Another area which involves manual experimentation: proper parsing of nodes and edges. Up until now, the library has assumed that there is at least a semi-colon between each individual statement in Dot code. This is not actually the case (which I found out when trying to parse the output of ghc-pkg dot), but when I tried changing this to assume that each statement ended in either a semi-colon or a newline (I’m a mathematician, so or implies both as well!), the edge a -> b was instead parsed as just the node a (with a parsing failure for the rest of the input). What makes it even better is that this is a valid Dot graph:

digraph { a b -> d c -> b -> a [color="red"] }

Now, how the hell am I meant to sensibly parse that?!?!? (Note: it isn’t the multi-edge c -> b -> a that’s the problem; the library can already cope with that; it’s just being able to tell when a particular statement ends and another begins). Bonus points if you can tell just by looking at it which edges are coloured red.

haskell-updater

I’ll get around to adding re-register support, etc. to haskell-updater someday, I promise!


Command Input/Output and blocking

For version 2999.7.0.0 of my graphviz library, one of the improvements I’ve made was to re-write how the actual Graphviz command is called, so that error messages can actually be reported to the user, and to make it more robust (compare the old version to the new one). Duncan Coutts helped me out with this, by giving me a starting point from Cabal’s rawSystemStdout' function.

So I wrote the code, recorded a Darcs patch, and then a few days later decided to actually test it (for some reason I didn’t seem to have actually tried using the function after writing it…). But whenever I tried to use it, I kept finding that the call to Graphviz would block: the input would seem to get consumed, but no output was generated. Duncan and I spent a few hours putting trace statements, etc. throughout before we eventually worked out what the problem was.

My challenge: without looking at the released version of the code that I’ve linked to above, try to find the problem part of the code below. There’s a crucial thing to remember here whenever making a system call to another executable. I’ve cleaned up and simplified the actual code I’ve put here, but otherwise it’s exactly what I had when I tried to work out why it wasn’t working. Note that hGetContents' is a strict version of hGetContents.

-- Extract the output result from calling the given Graphviz command with the given Graphviz output type.
-- If the function call wasn't a success, return the error message.
graphvizWithHandle :: (PrintDot n) => GraphvizCommand -> DotGraph n
                      -> GraphvizOutput -> IO (Either String String)
graphvizWithHandle cmd gr t
  = handle notRunnable
    $ bracket
        (runInteractiveProcess cmd' args Nothing Nothing)
        (\(inh,outh,errh,_) -> hClose inh >> hClose outh >> hClose errh)
        $ \(inp,outp,errp,prc) -> do

          -- The input and error are text, not binary
          hSetBinaryMode inp False
          hSetBinaryMode errp False
          hSetBinaryMode outp $ isBinary t -- Depends on output type

          forkIO $ hPutStr inp (printDotGraph gr)

          -- Need to make sure both the output and error handles are
          -- really fully consumed.
          mvOutput <- newEmptyMVar
          mvErr    <- newEmptyMVar

          _ <- forkIO $ signalWhenDone hGetContents' errp mvErr
          _ <- forkIO $ signalWhenDone hGetContents' outp mvOutput

          -- When these are both able to be taken, the forked threads
          -- have finished.
          err    <- takeMVar mvErr
          output <- takeMVar mvOutput

          -- Wait for the process to finish so that we can inspect its
          -- exit code.
          exitCode <- waitForProcess prc

          case exitCode of
            ExitSuccess -> return $ Right output
            _           -> return $ Left $ othErr ++ err
  where
    notRunnable e@SomeException{} = return . Left $ unwords
                                    [ "Unable to call the Graphviz command "
                                    , cmd'
                                    , " with the arguments: "
                                    , unwords args
                                    , " because of: "
                                    , show e
                                    ]

    cmd' = showCmd cmd
    args = ["-T" ++ outputCall t]

    othErr = "Error messages from " ++ cmd' ++ ":\n"

If you can work out what the problem is, reply with a comment below. The first person to spot it gets absolutely nothing except the knowledge that they spotted it first… >_>

Haskell, Rants

The Problems with Graphviz

I am talking about the suite of graph visualisation tools rather than my bindings for Haskell (for which I use a lower-case g). These are problems I mostly came across whilst both using Graphviz and writing the bindings.

What is a valid identifier?

On the main language specification page for the Dot language, it is stated that the following four types of values are accepted:

  • Any string of alphabetic ([a-zA-Z\200-\377]) characters, underscores ('_') or digits ([0-9]), not beginning with a digit;
  • a number [-]?(.[0-9]+ | [0-9]+(.[0-9]*)? );
  • any double-quoted string ("…") possibly containing escaped quotes (\");
  • an HTML string (<…>).

Note that quotes are the only escaped values accepted.

However, it isn’t clear what should happen if a number is used as a string value: does it need quotes or not? Furthermore, that page doesn’t specifically mention that keywords (graph, node, edge, etc.) need to be quoted when used as string values (it just says that compass points don’t have to be quoted).

What is a cluster?

The language specification page mentions that it is possible to have sub-graphs inside an overall graph, and that these sub-graphs can have optional identifiers. The Attributes page makes mention of cluster attributes. But the only way to tell how to define a cluster is to look at the examples page and notice that a sub-graph is a cluster if it has an ID that begins with cluster_ (with the underscore also appearing to be optional when playing with the Dot code manually). Furthermore, it isn’t specified that if you have more than one cluster then they must have unique identifiers; it doesn’t even suffice to have two “main” clusters with identifiers of Foo and Bar, each with a sub-cluster with an identifier of Baz: the sub-clusters have to have unique identifiers as well (see the example below); it took me a few hours to work this out.
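
For example (identifiers illustrative), the sub-clusters end up having to be renamed along these lines so that every cluster identifier is globally unique:

digraph {
  subgraph cluster_Foo { subgraph cluster_FooBaz { a } }
  subgraph cluster_Bar { subgraph cluster_BarBaz { b } }
}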

If that isn’t bad enough, the fact that cluster_ is at the beginning of every cluster identifier means that the normal quotation, etc. rules for values don’t seem to apply: an HTML identifier for a cluster now has the form "cluster_<http://www.haskell.org>"; that’s right, it’s a URL prepended with a string and then wrapped in quotes! This plays merry hell with any attempts at properly generating and parsing identifiers for sub-graphs, especially when considering what happens to escaped quotes inside that string (my approach has been to do a two-level printing/parsing).

Poor/inconsistent documentation

In several cases, the documentation for Graphviz contradicts itself. Take output values for example: the official list of output types can be found here. Yet, if we look at the documentation for how to define a color value, we find it mentions a non-existent “mif” output type. Not only that, but there are apparently various renderers and formatters available for each output type; not only are these renderers and formatters not listed anywhere, it isn’t even explained what they do (let alone what the difference between them is). Furthermore, to make matters even more interesting, on my system I have at least one more output type (x11) than what is listed there.

Custom standards

Another annoying factor is how Graphviz treats named colors. The default colorscheme is to use X11 colors. However, if you compare Graphviz’s X11 colors to the “official” list (such as it is; there’s no real official standard, but most X11 implementations seem to use the same one) you’ll notice that they’re different: some colors have been added and others removed. I admit that it could arise from an older X11 implementation’s definition of X11 colors, but it prevented me from making a common library to use for X11 colors.

Assertion Madness

Every now and again, Graphviz fails to visualise a graph because an internal assertion failed; for example: dot: rank.c:237: cluster_leader: Assertion `((n)->u.UF_size <= 1) || (n == leader)' failed. This is extremely annoying, not least because even looking through the relevant source code doesn’t reveal what the problem is. If these assertions are really needed for some reason, please say why and what the actual problem is.

Getting help

I’m spoiled: #haskell is one of the largest IRC channels on Freenode, and the various Gentoo ones are usually rather large and helpful as well. By contrast, whenever I try to get help from #graphviz I usually get no response; partially because there are sometimes only two other people there, neither of whom respond (probably due to time zones).

There’s more

There are other niggles I’ve had with Graphviz, but these are the main big problems I’ve had that I can recall.

Overall, however, Graphviz is a great set of applications; unfortunately, they seem to be feeling their age (along with keeping a large number of deprecated items floating around for compatibility purposes).

Haskell, Rants

If wishes were tests, code would be perfect

(With apologies to wherever the original came from.)

As I’ve mentioned previously, I’m currently writing a QuickCheck-based test suite for graphviz. Overall, I’m quite pleased with QuickCheck, especially considering the amount of moaning from people that QuickCheck-2 (which I’m using) is too different from the version 1.x series. The monadic usage of Gen for arbitrary means that, in most cases, instances are just a matter of picking the right liftM function with multiple calls to arbitrary. However, in the course of using it, I’ve come across some observations/problems with QuickCheck.
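
For example, a typical instance ends up looking something like this (the data type here is hypothetical):

import Control.Monad (liftM2)
import Test.QuickCheck

data Point = Point Double Double
  deriving (Show)

instance Arbitrary Point where
  arbitrary = liftM2 Point arbitrary arbitrary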

Default Instances

Usually, it’s great to see a library define instances of its own classes for the common data types (that is, those available from the Prelude, etc.). However, I’m finding the default instances of Arbitrary for lists (and, to a lesser but related extent, Chars) a pain. Specifically, how the shrink method is defined: it not only tries to shrink the size of the list (which is great) but also individually shrinks each element in the list. My preferred behaviour (for which I’ve defined a custom function that my code explicitly calls) is to just shrink the size of the list, unless it’s a singleton, in which case try to shrink that value.

The reason the default behaviour is so bad in my case is that I quite often have lists of custom data types, which can individually have lots of other sub-types in them, possibly with lists of their own. As such, if I used the default shrinking behaviour on lists, this could result in a lot of attempts at shrinking.
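
Here is a sketch of that shrinking policy (the function name is hypothetical, and my actual function differs in detail):

import Test.QuickCheck

-- Only shorten the list; for a singleton, shrink the element instead.
shrinkL :: (Arbitrary a) => [a] -> [[a]]
shrinkL [x] = map (: []) (shrink x)
shrinkL xs  = [ take n xs | n <- [0 .. length xs - 1] ]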

Note that this isn’t really a problem with QuickCheck per se: it’s great that it defines an Arbitrary instance for lists; it would just be great (though probably not type-safe, etc.) if it were possible to override class instances in Haskell.

Why lists anyway?

One of the problems with the shrinking behaviour of lists is due to the number of appends that occur; whilst lists are nicer/easier to deal with, using something like Seq from Data.Sequence might improve performance.

Getting big for its boots

In most of the tests that I’ve done, the problems that occur are usually in printing and parsing Attributes. As such, at the start of my test suite I run a test on lists of Attributes; to try and ensure that they’re valid I run 10000 tests rather than the default 100. These extra tests, however, come at a price: QuickCheck keeps testing longer and longer lists, which means that each individual test takes longer and longer to run. I’d prefer to run even more tests which are individually smaller (around the mid-point of what gets generated with 10000 tests); as it is, 10000 tests take over half an hour here.
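
Something along these lines is what I mean, using QuickCheck-2’s Args to run many small tests (the property here is a stand-in for the real print/parse round-trip over lists of Attributes):

import Test.QuickCheck

prop_roundTrip :: [Int] -> Bool
prop_roundTrip xs = reverse (reverse xs) == xs

-- Run many more tests, but cap how large the generated values may grow.
main :: IO ()
main = quickCheckWith stdArgs { maxSuccess = 10000, maxSize = 50 } prop_roundTrip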

Documentation

Whilst the high-level details are explained rather well, there are parts of the QuickCheck documentation that are rather lacking. First of all, how to use QuickCheck: I wasn’t aware that there was a community standard of starting the names of all properties with prop_ (though Real World Haskell deals relatively well with how to use QuickCheck). Also, it took me a while to dig through the (relatively un-documented) source to work out that a Result of “GaveUp{...}” is returned when too many values were discarded.
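
For example (a toy property, just to show the convention; the (==>) combinator is also one way to end up with the GaveUp result mentioned above):

import Test.QuickCheck

-- Property names conventionally start with prop_; the precondition
-- discards non-matching cases, and too many discards yields GaveUp.
prop_headMatches :: [Int] -> Property
prop_headMatches xs = not (null xs) ==> head xs == xs !! 0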

Keep going

You’ve found a value that breaks the property? Excellent (well, not in that it’s great to have a bug, but it’s great that it was picked up)! But can’t you please keep going and trying to find more?

Edit: one of the reasons I would like this behaviour is for when the test isn’t actually a failure per se, but just a matter of my Arbitrary instances not being strict enough. For example, if an instance generates a String value that is actually a number (e.g. "1.2") and my data type can hold either a Double or a String, then obviously this value should actually be parsed back as a Double; this, though, breaks the parse . print == id property in its strictest sense. As such, if quickCheck kept going, then I could manually verify whether it is a bug or not and fix it (so that an arbitrary String isn’t actually a number) whilst it kept doing the rest of the tests.

Getting results

Related to the previous point: once quickCheck has found a data value that breaks a property, the only way of getting that value (to manually determine why the property is breaking) is to copy/paste it; whilst the output can be redirected, it’s an all-or-nothing affair covering the entire output rather than just the data value itself. It would be even better if the Result data type were parametrised so that it could return the value in its Failure constructor; then my code could write it to file via my wrapper script around the QuickCheck tests.

Recursive values

In graphviz, I have a DotGraph data type which contains a DotStatement value; this contains a list of DotSubGraph values, each of which contains a DotStatement value. As such, my initial implementation of Arbitrary for these data types resulted in large, deeply recursive structures even for “small” sample values, which made it almost impossible to track down the source of the problem that resulted in an error. To solve this, I’ve done the following (a sketch in code follows the list):

  • Define an arbDotStatement :: Bool -> Gen DotStatement function which will only have a non-empty list of sub graphs if the boolean is True.
  • The Arbitrary instance for DotStatement has arbitrary = arbDotStatement True; that is, an arbitrary DotStatement value can contain DotSubGraphs.
  • The Arbitrary instance for DotSubGraphs uses arbDotStatement False to generate its DotStatement; that is, a DotSubGraph cannot have any DotSubGraphs of its own.
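
In code, the pattern looks roughly like this (with simplified stand-ins for the real graphviz types):

import Test.QuickCheck

data Statement = Statement { stmtNodes :: [Int], stmtSubs :: [SubGraph] }
  deriving (Show)

newtype SubGraph = SubGraph Statement
  deriving (Show)

arbDotStatement' :: Bool -> Gen Statement     -- name follows the list above
arbDotStatement' withSubs = do
  ns   <- arbitrary
  subs <- if withSubs
             then listOf (fmap SubGraph (arbDotStatement' False))
             else return []
  return (Statement ns subs)

instance Arbitrary Statement where
  arbitrary = arbDotStatement' True           -- may contain sub-graphs

instance Arbitrary SubGraph where
  arbitrary = fmap SubGraph (arbDotStatement' False)  -- no nested sub-graphs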

This results in an Arbitrary instance of any of these data types that won’t endlessly recurse and is thus easier to debug.

Brent Yorgey is doing work on testing functions that use recursive data types; that should also help in the future.

Shrinking

I think the inclusion of shrinking into QuickCheck is great, in how it helps find a minimal common case for a bug. I’ve found, however, that for large data types you need to be very careful how you implement the shrink method: I’ve found it useful to only shrink the sub-values that are most likely to have errors (that is, the Attributes) rather than checking every possible shrink of the integral node ID, etc.

How do you have 0.11043 of a shrink?

With shrinking, however, what does QuickCheck mean when it says something like 0.11043 shrinks? Is it trying to say how deeply it’s shrinking? Note that this doesn’t seem to be a real floating-point number; it seems to be treated as Int . Int.

Haskell, Rants

Waddaya know, testing WORKS!

In my previous post (what? I’m doing another post just three days after my previous one? 😮 ), I mentioned that I was planning on adding QuickCheck support to graphviz. Last night, I finished implementing the Arbitrary instances for the various Attribute sub-types and did a brief test to see if it worked… and came across three bugs :s

Parsing my own dog-food

The property that I was testing was that parse . print == id; that is, graphviz should be able to parse its own generated output and get the same result back. I decided to do a quick test on the Pos value type, as I figured this would be reasonably complex due to the usage of either points or splines. And yes, I was right that it was complex, as this revealed the following three bugs:

  • When printing the optional start and end points in a spline, they should be separated from each other and from the other points with spaces; I had used the pretty-printer <> combinator rather than <+> (see the demonstration after this list).
  • Lists of splines should have only semi-colons in between each spline, not a semi-colon and a space: using hcat rather than hsep fixed this.
  • The parsing behaviour was initially to try parsing the Pos value as a point first and then a spline. However, if the spline didn’t contain an optional start or end point, then the parser would successfully parse the first point in the spline as a stand-alone point, and then choke on the space following it (or indeed, a spline consisting of a single point followed by another spline would also confuse the parser). Thus, testing for a spline-based position first fixed this.
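
For reference, here is the difference between the combinators involved (using Text.PrettyPrint from the pretty package):

import Prelude hiding ((<>))
import Text.PrettyPrint

main :: IO ()
main = do
  putStrLn $ render (text "a" <>  text "b")        -- "ab"
  putStrLn $ render (text "a" <+> text "b")        -- "a b"
  putStrLn $ render (hcat [text "s1", text "s2"])  -- "s1s2"
  putStrLn $ render (hsep [text "s1", text "s2"])  -- "s1 s2"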

Note that these are two printing bugs and one parsing bug. The initial fix for the last bug, however, created another problem: as I alluded to, a spline consisting of a single point is equivalent to a point, so all point-based positions would be parsed as a single spline consisting of a single point. The parser now only parses spline-based positions, then converts to a point-based position where necessary.
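
To illustrate the two printing bugs, here’s how the relevant combinators from Text.PrettyPrint (in the pretty package) differ: <+> and hsep insert spaces, whereas <> and hcat do not.

    import Prelude hiding ((<>))   -- Doc's <> clashes with the Prelude's on modern GHCs
    import Text.PrettyPrint

    main :: IO ()
    main = do
      putStrLn . render $ text "1,2" <>  text "3,4"      -- "1,23,4"
      putStrLn . render $ text "1,2" <+> text "3,4"      -- "1,2 3,4"
      putStrLn . render $ hcat [text "s1;", text "s2"]   -- "s1;s2"
      putStrLn . render $ hsep [text "s1;", text "s2"]   -- "s1; s2"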

Taking into account that this is my first time using QuickCheck, I’m quite pleased with the results (not with the fact that I had bugs, but that it found them). I had read about QuickCheck in Real World Haskell, and I helped out with Tony Morris’ QuickCheck tutorial at the first ever meetup of the Brisbane Functional Programming Group (mainly in terms of Haskell syntax, etc. rather than QuickCheck in general), but that’s about it.

Rant about tests in packages

Duncan Coutts recently mentioned that QuickCheck is one of the packages that split HackageDB, due to the newer version 2 branch being incompatible with the (more popular) version 1 branch. My opinion is that this reflects a deeper problem with how Haskell developers treat testing in their packages, both from a user and from a distribution-packager point of view.

Let’s take hmatrix as an example. It uses both QuickCheck and HUnit for testing purposes. But why should an end user care about the tests, as long as the developer has run them? They introduce two compulsory dependencies (which I have no problem with in themselves) that most people don’t need or care about. Some library developers instead put their tests (and the tests’ dependencies) in a separate executable that is disabled by default (see the sketch after this list); however, due to how Cabal deals with this (Duncan has partially fixed the problem for Cabal-1.8), these “optional” dependencies are still required. I can think of several reasons why developers include the tests inside the main package in this way (listed in what I think is decreasing order of validity):

  • The tests use internal data structure implementations or functions that should not be publicly accessible.
  • The tests are also located inside the library as extra documentation about the properties of the library.
  • Convenience; everything is all bundled together, and if end users want to test the validity of the code it’s there for them.
  • Laziness: why bother separating the tests out when bundling them makes it easier to just “cabal install” and run the test binary?
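
For reference, the disabled-by-default approach mentioned above typically looks something like this in the Cabal file (hypothetical names); the problem is that, prior to Cabal-1.8, the test-only dependencies are resolved even when the flag is off:

    Flag test
      Description: Build the test executable
      Default:     False

    Executable mylib-tests           -- hypothetical test driver
      Main-Is:         Tests.hs
      if flag(test)
        Buildable:     True
        Build-Depends: base, QuickCheck
      else
        Buildable:     False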

I myself have never run a test suite for a package that is not my own, and I wonder how many people actually do. I just find that this makes packaging libraries for Gentoo more difficult, and leads to the problems Duncan has been having on Hackage.

My approach

I have a darcs repository for graphviz, the location of which is listed in the Cabal file (and displayed by Hackage). This repository is publicly available for anyone to get a copy of, so if people want to send me a patch with extra functionality they can get the latest code I’m working on.

My testing files are located within this repository; the actual tests are defined and run in a module separate from those where the data structures and functions are defined. I can use it to test my code; anyone else who wants to test it can grab the repository and do so. However, I am not going to include the testing module[s] in any releases of the library.

I believe that in many cases this would be a much better approach to developing and distributing test suites. At the very least, if you are unable to extricate the tests from your project’s source files, using a pre-processor to remove them from the distributed tarballs might be a valid approach (though I have no idea how hard or easy that would be). This way, people who want to run the test suite can, and people who trust the developers (like me, for the most part) don’t have to install dependencies that are, to them, mostly useless.



Past, Present and PEPM

I was planning on posting semi-regularly here, but I’ve been procrastinating way too much. It’s not that I don’t have anything to write about; it’s just that by the time I get on the computer I can usually find better things to do 😉

Anyway, this is kind of a summary update about what I’ve been working on and what I’m going to be doing.

The Graph Trifecta

I’m responsible for three graph-related Haskell packages on HackageDB: graphviz, Graphalyze and SourceGraph. These three packages are interrelated: graphviz is used by Graphalyze which is in turn used by SourceGraph (which also uses graphviz).


The graphviz package provides bindings to the Graphviz (note the difference in capitalisation) suite of tools for visualising graphs. Well, OK, I’m not actually binding the C library: I generate the appropriate Dot code and call the appropriate tool, but close enough. I’m hoping it will eventually become the definitive way of using Graphviz from Haskell (as opposed to all the other packages that either provide “bindings” to Graphviz or do so internally). It does have some limitations, however:

  • Only supports conversion to/from FGL graphs for now, as there’s as yet no standard way of treating an arbitrary graph data structure.
  • Does not support arbitrary placement of Dot statements but follows a specific order; I did ask on the Haskell mailing lists if anyone wanted this feature and was told a resounding “no”; I might add it as an option in a future release, however.

It seems every release of graphviz introduces an API change. Well, OK, not every release; there are bug-fix releases, etc., and I use the Haskell Package Versioning Policy (PVP), so releases that do break the API are obvious. Also, since the 2999.5.0.0 release I’ve been trying to keep API changes to a minimum whilst still improving the library. After all, it’s all part of the process of making A Better Library (TM) for everyone 😉

What I’m working on at the moment for graphviz is improved support for interacting with the actual Graphviz tools: how you call them, and how you choose the format the output is created in. Rather than just indicating whether the operation was successful and printing any error messages to stderr, it now uses the Either datatype to return the actual error if there is one. The code isn’t perfect (a full-disk exception will currently freeze it, etc.) but it’s getting there. I’ve also improved the image-output support by allowing the programmer to use any (non-deprecated) image type that Graphviz officially supports, not just those that happened to be available on my system when the first version was written (note that there may be other types that aren’t listed in the official documentation; my install supports a gv output type and I have no idea what it is); the library can also automatically add the correct file extension to the output filename. Canvas output types (i.e. opening a window with the graph visualisation rather than creating a file) are now also treated separately.
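
As an illustration of the style (these names are made up, not graphviz’s actual API):

    import System.Exit    (ExitCode (..))
    import System.Process (readProcessWithExitCode)

    -- Run a Graphviz tool over some Dot code, returning either the error
    -- output or the tool's result, rather than printing to stderr and
    -- merely reporting success or failure.
    runDotTool :: FilePath -> [String] -> String -> IO (Either String String)
    runDotTool cmd args dotCode = do
      (ec, out, err) <- readProcessWithExitCode cmd args dotCode
      return $ case ec of
                 ExitSuccess -> Right out
                 _           -> Left err

For example, runDotTool "dot" ["-Tpng", "-o", "graph.png"] dotCode either yields the tool’s output or tells you exactly what went wrong.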

I’m also working on adding QuickCheck support to ensure that parse . print == id for all supported types. Note that the reverse isn’t valid: graphviz doesn’t support arbitrary placement of statements in Dot code, and it parses more liberally than it prints. This should hopefully find any lingering bugs in the parsing and printing side of things. The next step would then be to ensure that any generated Dot code can be processed by Graphviz using the Dot (why does Graphviz have to call so many things “Dot”?) output type to add position information, etc., and have that fully parseable.
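
The property itself is straightforward; here’s a sketch with hypothetical printIt/parseIt functions standing in for graphviz’s actual printing and parsing machinery:

    import Test.QuickCheck

    -- Printing then parsing should be the identity.
    prop_printParse :: Eq a => (a -> String) -> (String -> a) -> a -> Bool
    prop_printParse printIt parseIt v = parseIt (printIt v) == v

    -- e.g., for Pos (given suitable printPos/parsePos):
    --   quickCheck (prop_printParse printPos parsePos :: Pos -> Bool)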

Documentation is also gradually being improved. I never intend for users to be able to completely ignore the official Graphviz documentation; rather, I’m trying to make the differences in opinion/terminology/etc. between Graphviz and graphviz as obvious as possible, as well as providing broad overviews of the different options and links to the appropriate upstream documentation.


Graphalyze was the main focus of my mathematics honours thesis last year. It provides a library for using graph theory to analyse the relationships within discrete data sets. Nowadays, it mainly serves to do the heavy lifting for SourceGraph.

Work on Graphalyze mainly consists of adding the extra graph algorithms I want for SourceGraph. I also eventually want to replace the current reporting framework with a pretty-printing-based one, possibly removing the pseudo-option of using something other than Pandoc and just making it the default; this will definitely necessitate a licensing change for most of the library, however (which I have no problem with, and it doesn’t appear to be used by anyone else…).


First of all, I was urged (by people who shall remain nameless unless they say it’s OK) to submit a paper based upon SourceGraph to Partial Evaluation and Program Manipulation 2010 (PEPM), which is going to be held in Madrid next January. I didn’t expect to get it accepted (to be honest, there’s nothing in SourceGraph that I think is that interesting/new at the moment, except possibly the overall concept), but it gave me a figurative kick up the proverbial to actually update SourceGraph (which was my first ever real released Haskell program… 😮 ), and I thought the actual process of writing the paper would be good practice. It turns out, however, that the paper was accepted, which resulted in a mad rush to update it (many thanks to quicksilver on #haskell for fixing up my Haskell terminology). So I now find myself preparing for my first ever real academic conference, let alone the first one I’ll be presenting at! What makes the situation even more interesting is that I’m currently not a student (more on that later), so the financial aspect will be interesting… I’ll put a version of the paper up here as soon as I fully understand the hoops I have to jump through to do so.

Whilst SourceGraph was just a sample application of Graphalyze in my honours thesis, it was why I’d originally thought up the topic; my supervisor convinced me to make the application aspects more general by splitting the graph-theory side of things out into a library with several sample usages, of which I only ever ended up doing the one (mainly due to time constraints). It is now the driving force behind graphviz and Graphalyze (well, OK, users sending me bug reports/feature requests and my own sense of pride also help with graphviz; the latter is what prompted the re-write of the printing/parsing layout whilst I was meant to be on holidays overseas visiting family…) and the focus (transitively or otherwise) of most of my Haskell hacking at the moment.

Anyway, I’m currently trying to improve the quality of SourceGraph’s output, both in terms of usage (by adding command-line parameters) and in the kinds of analyses it produces. I have rather shot myself in the foot, however, as I’ve already updated it to use the as-yet-unreleased 1.8 series of Cabal, so I can’t release it (not that it’s anywhere close to a releasable state) until Cabal-1.8 is available…

Haskell in Gentoo

I’m going to do a proper post over on the official Gentoo-Haskell blog later on about most of this, but here’s a brief rundown on what I’ve been doing (along with kolmodin, trofi, etc.):

  • I’m still not an official dev, as I don’t have time/can’t be bothered (pick whichever you think is more accurate :p ) to finish off the dev quiz; I still help out on the IRC channel, answer (but not close) bugs on bugzilla, etc.
  • I’ve been doing some “spring-cleaning” in the overlay to try and remove old, unused packages (and versions of packages). I’m defining “unused” as “the version in the overlay is out of date, no-one has complained about it being out of date and I can’t be bothered checking through all the dependencies, etc.” 😉
  • Trying to get GHC 6.12 RC1 to work; whilst I’ve managed to build an x86 binary bootstrap package, I’m not able to actually use it to bootstrap, and if I try to install it with USE="binary", our hack for not installing haddock with GHC no longer seems to work :s
  • haskell-updater has been updated to work with GHC 6.12/Cabal 1.8 \o/. Of course, I can’t actually test this fully as I’m not able to install 6.12 here… The underlying code has also been rewritten to make it easier to add extra options down the track.


After I finished my maths honours last year, I had been hoping to study at the University of Edinburgh this year (starting in September, which would have been convenient for going to ICFP), but whilst I was accepted I was unable to obtain any funding 😦 As such, I’ve been either working at the University of Queensland (doing some casual sysadmin-ing as well as developing material for and helping to run the new SCIE1000 subject) or working on a few odds and ends at home (that is, the projects mentioned above) all this year.

Because my dreams of studying overseas have been dashed, I’ve now applied to study at the Australian National University in Canberra under the supervision of Professor Brendan McKay. Whilst I haven’t been officially accepted, I’ve been told that I’m guaranteed a spot (and at least some funding) due to my honours results last year. I’ll be working on combinatorial algorithms, so nothing officially Haskelly in nature (but I intend to do most – if not all – of my coding in my favourite programming language).


I mentioned previously that I’d be going to PEPM next year… will anyone else from the Haskell community be there (or at any of the other conferences associated with POPL)? I’ve as yet only had the pleasure of meeting one person (dibblego), and it’d be great if I could put some faces to some of the other names I see in #haskell or on the mailing lists…

That’s all folks

In case anyone cares, this is what I’ve been up to and what I’m planning on doing. This blog post was mainly an attempt to get me writing here more often, as well as to stifle Joe Fredette’s continual complaints about a lack of Haskell Weekly News material because I haven’t released anything recently 😉 So Joe, go ahead and write about this! :p